Hofstra University, Hempstead, New York 11550
Fred DeMatteis School of Engineering and Applied Science,
Department of Computer Science
Weed Hall, Rm. 201
Mobile: (516) 428-5041
Email: Steven.C.Lindo@hofstra.edu
My Background and Experience
I attended Hofstra University, where I earned a Bachelor's and a Master's degree in Computer Science. I later earned a Doctor of Professional Studies from Pace University's Seidenberg School of Computer Science. I worked for 14 years at (Thomson) Reuters, and for the past 16 years I have worked at Wolters Kluwer. I began teaching at Hofstra University in 2015 and have taught the following:
Undergraduate Courses
Graduate Courses
My Philosophy: Computer Science is the study of all things computing. It is the study of systems, processes, and methods for the purpose of delivering computing solutions to problems. It is an engineering discipline for analyzing, designing, developing, and delivering technology.
"If we knew what it was we were doing, it would not be called research" - Albert Einstein
Publications
Dissertation Abstract
Link to full dissertation
A Comparative Study of Collaborative Filtering Recommendation Systems Using Algorithms to Impute Large Sparse Matrices
In the modern era of computing, recommendation systems are a key component of enterprise systems and consumer applications, including e-commerce and web applications. The challenge for these systems is the accuracy and quality of their calculations, especially when the underlying data is sparse. This research provides an empirical study of the problems a sparse matrix poses for collaborative filtering recommendation systems. It conducts a comparative analysis of different algorithms used to address sparse data while trying to predict and prescribe (recommend) the optimal choice for users. The research compares statistical techniques for imputing missing data with other estimation techniques for predicting missing values, and shows why Matrix Factorization, Maximum Likelihood Estimation (MLE), and gradient optimization methods work better for large sparse matrices than simple mean, sub-group mean, or regression methods.
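The contrast the abstract draws can be sketched in a few lines of NumPy: a simple item-mean imputation of missing ratings versus a matrix factorization fit by gradient descent on the observed entries only. This is an illustrative toy example under assumed data and hyperparameters, not the code or datasets used in the dissertation.

```python
import numpy as np

# Toy user-item ratings matrix; 0 marks a missing entry.
# (Hypothetical data for illustration, not the study's dataset.)
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)
mask = R > 0  # observed entries

# Baseline: impute each missing value with its item's (column) mean.
col_sums = np.where(mask, R, 0.0).sum(axis=0)
col_counts = mask.sum(axis=0)
col_means = np.divide(col_sums, col_counts,
                      out=np.zeros_like(col_sums), where=col_counts > 0)
R_mean = np.where(mask, R, col_means)

def factorize(R, mask, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Approximate R as P @ Q.T by gradient descent on observed cells only."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(steps):
        E = np.where(mask, R - P @ Q.T, 0.0)  # error on observed cells only
        P += lr * (E @ Q - reg * P)           # gradient step, L2-regularized
        Q += lr * (E.T @ P - reg * Q)
    return P @ Q.T

R_mf = factorize(R, mask)
print(np.round(R_mf, 2))  # every cell filled, including the missing ones
```

The mean-imputation baseline assigns one constant per item regardless of the user, while the factorization exploits the low-rank structure of user preferences, which is why the latter tends to generalize better on large sparse matrices.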