IN-V-BAT-AI ends math and calculator learning loss, so you never forget.
How do our brain neurons remember?
Remembering on demand is now possible!
import pandas as pd
import numpy as np
import scipy as sp
import warnings
import mpl_toolkits.mplot3d.axes3d as p3
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from scipy import fftpack
from scipy import integrate
# comment # Do shift + enter
# comment : Hierarchical multi-indexing is a powerful data analysis and manipulation tool
# comment : Show me how it works.
arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
tuples = list(zip(*arrays))
tuples
# comment # Do shift + enter
# comment : create a MultiIndex from the list of tuples, naming the levels 'first' and 'second'
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
index
# comment # Do shift + enter
# comment : both columns become levels of the row index: the first level and the second level
# comment : create a Series of 8 random values using the MultiIndex
s = pd.Series(np.random.randn(8), index=index)
s
# comment # Do shift + enter
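# comment : a minimal selection sketch, assuming the Series s above: index by the
# comment : first level, by a full (first, second) tuple, or cross-section the second level
s['bar']
s.loc[('baz', 'two')]
s.xs('one', level='second')
# comment # Do shift + enter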
# comment : create a list of arrays
arrays = [np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux']),
          np.array(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'])]
arrays
s = pd.Series(np.random.randn(8), index=arrays)
s
# comment # Do shift + enter
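# comment : as an aside, a sketch showing that from_product builds the same index more compactly
index2 = pd.MultiIndex.from_product([['bar', 'baz', 'foo', 'qux'], ['one', 'two']],
                                    names=['first', 'second'])
index2.equals(index)
# comment # Do shift + enter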
# comment : create a DataFrame with 8 rows and 4 columns of random numbers,
# comment : using the arrays list as the index. Then display the newly created table.
# comment : The code below satisfies these requirements.
df = pd.DataFrame(np.random.randn(8, 4), index=arrays)
df
# comment # Do shift + enter
# comment : inspect the index level names; an index built from plain arrays has unnamed levels
df.index.names
# comment # Do shift + enter
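# comment : a small sketch: set_names attaches level names after the fact
df.index = df.index.set_names(['first', 'second'])
df.index.names
# comment # Do shift + enter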
# comment : Create a new DataFrame named bar with 3 rows and the 8 MultiIndex columns
bar = pd.DataFrame(np.random.randn(3, 8), index=['A', 'B', 'C'], columns=index)
bar
# comment # Do shift + enter
# comment : Plot the bar chart of the DataFrame named bar
bar.plot.bar(figsize=(12,4))
# comment # Do shift + enter
# comment : create a 6x6 DataFrame using the first six MultiIndex entries for both rows and columns
bar2 = pd.DataFrame(np.random.randn(6, 6), index=index[:6], columns=index[:6])
bar2
# comment # Do shift + enter
# comment : Plot the bar chart of the DataFrame named bar2
bar2.plot.bar(figsize=(12,4))
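# comment # Do shift + enter
# comment : hierarchical indexes also enable reshaping; a brief sketch, assuming the Series s above:
# comment : unstack pivots the inner index level into columns, and stack reverses it
s.unstack()
s.unstack().stack()
# comment # Do shift + enter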
# comment : linear regression model using the ordinary least squares (OLS) method
# comment : Random number generator seed(9876789) yields R-squared = 0.999
# comment : seed(25439587) yields R-squared = 0.954
# comment : seed(0) yields R-squared = 0.928
np.random.seed(0)
nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)
X = sm.add_constant(X)
y = np.dot(X, beta) + e
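# comment : note that the quadratic design above is redefined below before it is ever
# comment : fitted; as a sketch, you could fit and score it here first
res_quad = sm.OLS(y, X).fit()
print('Quadratic model R^2: ', res_quad.rsquared)
# comment # Do shift + enter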
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
model = sm.OLS(y, X)
results = model.fit()
res = results  # same fitted results under the shorter name used in the plotting cell below
print(results.summary())
print('')
print('Parameters: ', results.params)
print('R-squared result (R^2): ', results.rsquared)
# comment # Do shift + enter
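# comment : a quick check sketch: for a model with an intercept, R-squared equals 1 - SSR/SST
ss_res = np.sum(results.resid**2)
ss_tot = np.sum((y - y.mean())**2)
print('Manual R^2 check: ', 1 - ss_res/ss_tot)
# comment # Do shift + enter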
# comment : plot the data, the true curve, the OLS fit, and the prediction interval
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('Predicted values: ', res.predict())
prstd, iv_l, iv_u = wls_prediction_std(res)
fig, ax = plt.subplots(figsize=(12,6))
ax.plot(x, y, 'o', label="data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res.fittedvalues, 'r--.', label="OLS")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
ax.legend(loc='best');
# comment # Do shift + enter
# comment : load a sample dataset (Engel's food expenditure data) from the statsmodels module
# comment : show me how to use Least Absolute Deviation (LAD) quantile regression
data = sm.datasets.engel.load_pandas().data
data.head()
# comment # Do shift + enter
# comment : "res" is the abbreviation for regression analysis
mod = smf.quantreg('foodexp ~ income', data)
res = mod.fit(q=.5)
print(res.summary())
# comment # Do shift + enter
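# comment : a sketch of what the fit minimizes: the asymmetric "check" loss
# comment : rho_q(u) = u * (q - 1[u < 0]); at q = 0.5 it reduces to half the absolute residual
u = data['foodexp'] - res.predict(data)
q = 0.5
print(np.sum(u * (q - (u < 0))), 0.5 * np.abs(u).sum())
# comment # Do shift + enter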
# comment : fit the model across a grid of quantiles, collect the income coefficients, and compare with OLS
quantiles = np.arange(.05, .96, .1)
def fit_model(q):
    res = mod.fit(q=q)
    return [q, res.params['Intercept'], res.params['income']] + \
        res.conf_int().loc['income'].tolist()
models = [fit_model(x) for x in quantiles]
models = pd.DataFrame(models, columns=['q', 'a', 'b', 'lb', 'ub'])
ols = smf.ols('foodexp ~ income', data).fit()
ols_ci = ols.conf_int().loc['income'].tolist()
ols = dict(a=ols.params['Intercept'],
           b=ols.params['income'],
           lb=ols_ci[0],
           ub=ols_ci[1])
print(models)
print(ols)
# comment # Do shift + enter
# comment : plot each quantile fit as a dotted line against the OLS fit and the raw data
x = np.arange(data.income.min(), data.income.max(), 50)
get_y = lambda a, b: a + b * x
fig3, ax = plt.subplots(figsize=(12, 6))
for i in range(models.shape[0]):
    y = get_y(models.a[i], models.b[i])
    ax.plot(x, y, linestyle='dotted', color='grey')
y = get_y(ols['a'], ols['b'])
ax.plot(x, y, color='red', label='OLS')
ax.scatter(data.income, data.foodexp, alpha=.2)
ax.set_xlim((240, 3000))
ax.set_ylim((240, 2000))
legend = ax.legend()
ax.set_xlabel('Income', fontsize=16)
ax.set_ylabel('Food expenditure', fontsize=16);
# comment # Do shift + enter
# comment : plot the income coefficient across quantiles with its confidence band, against the OLS estimate
fig4, ax = plt.subplots(figsize=(12, 6))
n = models.shape[0]
p1 = plt.plot(models.q, models.b, color='black', label='Quantile Reg.')
p2 = plt.plot(models.q, models.ub, linestyle='dotted', color='black')
p3 = plt.plot(models.q, models.lb, linestyle='dotted', color='black')
p4 = plt.plot(models.q, [ols['b']] * n, color='red', label='OLS')
p5 = plt.plot(models.q, [ols['lb']] * n, linestyle='dotted', color='red')
p6 = plt.plot(models.q, [ols['ub']] * n, linestyle='dotted', color='red')
plt.ylabel(r'$\beta_{income}$')
plt.xlabel('Quantiles of the conditional food expenditure distribution')
plt.legend()
plt.show()
# comment # Do shift + enter
# comment : The RecursiveLS class allows computation of recursive residuals and computes CUSUM and
# CUSUM of squares statistics. Plotting these statistics along with reference lines denoting statistically
# significant deviations from the null hypothesis of stable parameters allows an easy visual indication of
# parameter stability.
print(sm.datasets.copper.DESCRLONG)
dta = sm.datasets.copper.load_pandas().data
dta.index = pd.date_range('1951-01-01', '1975-01-01', freq='AS')  # 'AS' = year-start frequency; newer pandas spells it 'YS'
endog = dta['WORLDCONSUMPTION']
# To the regressors in the dataset, we add a column of ones for an intercept
exog = sm.add_constant(dta[['COPPERPRICE', 'INCOMEINDEX', 'ALUMPRICE', 'INVENTORYINDEX']])
# comment # Do shift + enter
# comment : the summary table only presents the regression parameters estimated on the entire sample;
# except for small effects from the initialization of the recursions, these estimates are equivalent to OLS estimates.
mod = sm.RecursiveLS(endog, exog)
res = mod.fit()
print(res.summary())
# comment # Do shift + enter
# comment : Plot the recursive coefficients
print(res.recursive_coefficients.filtered[0])
res.plot_recursive_coefficient(range(mod.k_exog), alpha=None, figsize=(10,6));
# comment # Do shift + enter
# comment : In the plot below, the CUSUM statistic does not move outside of the 5% significance bands,
# so we fail to reject the null hypothesis of stable parameters at the 5% level.
print(res.cusum)
fig = res.plot_cusum(figsize=(12,4));
# comment # Do shift + enter
# comment : In the plot below, the CUSUM of squares statistic does not move outside of the 5% significance bands,
# so we fail to reject the null hypothesis of stable parameters at the 5% level.
res.plot_cusum_squares(figsize=(12,4));
# comment # Do shift + enter
The IN-V-BAT-AI solution can be a valuable tool in classrooms, enhancing both the teaching and the learning experience. Here are some ways it can be utilized:
⋆ Personalized Learning : By storing and retrieving knowledge in the cloud, students can access tailored resources and revisit
concepts they struggle with, ensuring a more individualized learning journey.
⋆ Memory Support : The tool helps students recall information even when stress or distractions hinder their memory, making it
easier to retain and apply knowledge during homework assignments or projects.
⋆ Bridging Learning Gaps : It addresses learning loss by providing consistent access to educational materials, ensuring that
students who miss lessons can catch up effectively.
⋆ Teacher Assistance : Educators can use the tool to provide targeted interventions to support learning.
⋆ Stress Reduction : By alleviating the pressure of memorization, students can focus on understanding and applying concepts,
fostering a deeper engagement with the material.
Year | Top 10 visitor countries | Pages / Visitors
2023 | 1. USA 2. Great Britain 3. Germany 4. Canada 5. Iran 6. Netherlands 7. India 8. China 9. Australia 10. Philippines | 127,256 pages / 27,541 visitors
2024 | 1. USA 2. China 3. Canada 4. Poland 5. India 6. Philippines 7. Great Britain 8. Australia 9. Indonesia 10. Russia | 164,130 pages / 40,724 visitors
2025 (daily ranking as of 7/9/2025) | 1. USA 2. India 3. Iran 4. Latvia 5. Canada 6. Russia 7. Lithuania 8. Poland 9. Belize 10. China | 100,343 pages / 29,569 visitors (year to date)