<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>不一样的烟火</title>
  
  <subtitle>Hi~ long time no see</subtitle>
  <link href="/atom.xml" rel="self"/>
  
  <link href="https://qwq1082.github.io/"/>
  <updated>2019-07-18T05:51:54.583Z</updated>
  <id>https://qwq1082.github.io/</id>
  
  <author>
    <name>不一样的烟火</name>
    
  </author>
  
  <generator uri="http://hexo.io/">Hexo</generator>
  
  <entry>
    <title>What Is a Sparse Reward?</title>
    <link href="https://qwq1082.github.io/2019/07/18/SparseReward/"/>
    <id>https://qwq1082.github.io/2019/07/18/SparseReward/</id>
    <published>2019-07-18T03:50:59.000Z</published>
    <updated>2019-07-18T05:51:54.583Z</updated>
    
    <content type="html"><![CDATA[<p>When people learn, the reward often fails to arrive in time. Think of a parent telling a child to do homework: the child may read it as negative feedback and simply refuse (doing homework hurts qwq), never seeing the huge payoff that comes later: finish the homework, grades improve, a good university, and life heads uphill from there…</p><h3 id="这个一开始的暂时的小的reward-就叫-Sparse-Reward"><a href="#这个一开始的暂时的小的reward-就叫-Sparse-Reward" class="headerlink" title="This small, short-lived early reward is what we call a sparse reward"></a>This small, short-lived early reward is what we call a sparse reward</h3><h2 id="如何让agent在Sparse-Reward-中拥有更好的学习表现？"><a href="#如何让agent在Sparse-Reward-中拥有更好的学习表现？" class="headerlink" title="How can an agent learn well under sparse rewards?"></a>How can an agent learn well under sparse rewards?</h2><h4 id="1-“写完作业就给糖吃”"><a href="#1-“写完作业就给糖吃”" class="headerlink" title="1. “Candy as soon as the homework is done”"></a>1. “Candy as soon as the homework is done”</h4><p>Manually assign a positive reward to a few key actions, so the agent does not grow averse to them and can work its way, step by step, toward the large final reward.</p><p><img src="https://qwq1082.github.io/2019/07/18/SparseReward/1563422881839.png" alt></p><h4 id="2-“兴趣是最好的老师-”-Curiosity-Module"><a href="#2-“兴趣是最好的老师-”-Curiosity-Module" class="headerlink" title="2. “Interest is the best teacher”: the Curiosity Module"></a>2. “Interest is the best teacher”: the Curiosity Module</h4><p>Small fluctuations in the environment rarely give the agent any useful feedback. Instead, have the agent itself predict the future reward of each action; this also reaches the final goal, and making such predictions keeps the agent interested in learning.</p><p><img src="https://qwq1082.github.io/2019/07/18/SparseReward/1563426283604.png" alt></p><h4 id="3-“制定学习计划”-Curriculum-Learning"><a href="#3-“制定学习计划”-Curriculum-Learning" class="headerlink" title="3. “Make a study plan”: Curriculum Learning"></a>3. “Make a study plan”: Curriculum Learning</h4><p>A human designs the agent's learning order, so that it learns tasks from easy to hard.</p><p><img src="https://qwq1082.github.io/2019/07/18/SparseReward/1563426300100.png" alt></p><p><img src="https://qwq1082.github.io/2019/07/18/SparseReward/1563426313714.png" alt></p><h4 id="4-阶层式强化学习-Hierarchical-RL"><a href="#4-阶层式强化学习-Hierarchical-RL" class="headerlink" title="4. Hierarchical Reinforcement Learning (Hierarchical RL)"></a>4. Hierarchical Reinforcement Learning (Hierarchical RL)</h4><p>Upper-level agents propose goals, and the lowest-level agent executes the concrete actions.</p><p><img src="https://qwq1082.github.io/2019/07/18/SparseReward/1563426561160.png" alt></p><p><img src="https://qwq1082.github.io/2019/07/18/SparseReward/1563426796100.png" alt></p>]]></content>
    
    <summary type="html">
    
      
      
        &lt;p&gt;When people learn, the reward often fails to arrive in time. Think of a parent telling a child to do homework: the child may read it as negative feedback and simply refuse (doing homework hurts qwq), never seeing the huge payoff that comes later: finish the homework, grades improve, a good university, and life heads uphill from there…&lt;/p&gt;
      
    
    </summary>
    
      <category term="RL" scheme="https://qwq1082.github.io/categories/RL/"/>
    
    
  </entry>
  
  <entry>
    <title>Watermelon Book Exercise 3.4 (Cross-Validation)</title>
    <link href="https://qwq1082.github.io/2019/07/10/%E8%A5%BF%E7%93%9C%E4%B9%A6%E4%B9%A0%E9%A2%983-4/"/>
    <id>https://qwq1082.github.io/2019/07/10/西瓜书习题3-4/</id>
    <published>2019-07-10T09:35:02.000Z</published>
    <updated>2019-07-10T10:37:30.808Z</updated>
    
    <content type="html"><![CDATA[<h1 id="西瓜书习题3-4-交叉验证法-："><a href="#西瓜书习题3-4-交叉验证法-：" class="headerlink" title="Watermelon Book Exercise 3.4 (cross-validation):"></a>Watermelon Book Exercise 3.4 (cross-validation):</h1><p>Pick two UCI datasets and compare the error rates of logistic regression as estimated by 10-fold cross-validation and by leave-one-out.</p><h2 id="1-数据集长啥样？"><a href="#1-数据集长啥样？" class="headerlink" title="1. What does the dataset look like?"></a>1. What does the dataset look like?</h2><p>So I downloaded a UCI dataset, and it looks like this:</p><p><img src="https://qwq1082.github.io/2019/07/10/%E8%A5%BF%E7%93%9C%E4%B9%A6%E4%B9%A0%E9%A2%983-4/1562751598858.png" alt="1562751598858"></p><p>As for what these numbers mean, or what UCI even is, I won't pretend to know qwq~. What matters: 748 rows, 5 columns, i.e. a 748 × 5 matrix. The 5th column holds only 0s and 1s, so it must be the labels: a binary classification problem.</p><h2 id="2-啥是十折交叉验证法？啥是留一法？"><a href="#2-啥是十折交叉验证法？啥是留一法？" class="headerlink" title="2. What are 10-fold cross-validation and leave-one-out?"></a>2. What are 10-fold cross-validation and leave-one-out?</h2><h3 id="k折交叉验证法："><a href="#k折交叉验证法：" class="headerlink" title="k-fold cross-validation:"></a>k-fold cross-validation:</h3><p>Randomly split the dataset into k parts; train on k-1 of them and test on the remaining one. Repeat the process k times, using a different part as the test set each time.</p><p><img src="https://qwq1082.github.io/2019/07/10/%E8%A5%BF%E7%93%9C%E4%B9%A6%E4%B9%A0%E9%A2%983-4/1424198-20181001171358337-144383367.png" alt></p><p>(1) In each iteration, hold out one bucket: bucket 1 in the first iteration, bucket 2 in the second, and so on.</p><p>(2) Train the classifier on the other k-1 buckets (in the first iteration, on buckets 2 through k).</p><p>(3) Finally, return the mean accuracy over the k test runs.</p><h4 id="十折交叉验证法就是-k-10-的情况"><a href="#十折交叉验证法就是-k-10-的情况" class="headerlink" title="10-fold cross-validation is the k=10 case"></a>10-fold cross-validation is simply the <strong>k=10</strong> case</h4><h4 id="留一法则是-k-总样本数-的情况，即每次迭代从总样本取一条数据做测试集，剩余的全做训练集"><a href="#留一法则是-k-总样本数-的情况，即每次迭代从总样本取一条数据做测试集，剩余的全做训练集" class="headerlink" title="Leave-one-out is the k = number of samples case: each iteration takes one sample as the test set and trains on all the rest"></a>Leave-one-out is the <strong>k = number of samples</strong> case: each iteration takes one sample as the test set and trains on all the rest</h4><h4 id="3-以上一顿分析猛如虎，是时候该撸码实现啦-qwq"><a href="#3-以上一顿分析猛如虎，是时候该撸码实现啦-qwq" class="headerlink" title="3. Enough fierce analysis; time to write the code qwq"></a>3. Enough fierce analysis; time to write the code qwq</h4><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span
class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span 
class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="keyword">import</span> pandas <span class="keyword">as</span> pd</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment"># sigmoid函数</span></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">sigmoid</span><span class="params">(z)</span>:</span></span><br><span class="line">    <span 
class="keyword">return</span> <span class="number">1.0</span> / (<span class="number">1.0</span> + np.exp(-z))</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment"># 梯度上升算法</span></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">grad</span><span class="params">(train_X, labels, iters=<span class="number">2000</span>)</span>:</span></span><br><span class="line">    m, n = train_X.shape</span><br><span class="line">    <span class="comment"># 步长alpha</span></span><br><span class="line">    alpha = <span class="number">0.05</span></span><br><span class="line">    <span class="comment"># 初始化权重，全设为1</span></span><br><span class="line">    weights = np.ones((n, <span class="number">1</span>))</span><br><span class="line"></span><br><span class="line">    <span class="comment"># 2000次迭代</span></span><br><span class="line">    <span class="keyword">for</span> k <span class="keyword">in</span> range(iters):</span><br><span class="line">        <span class="comment"># 沿着梯度方向，向前移动，并更新权重</span></span><br><span class="line">        P = sigmoid(train_X.dot(weights))</span><br><span class="line">        error = labels - P</span><br><span class="line">        weights += alpha * np.dot(train_X.T, error)</span><br><span class="line"></span><br><span class="line">    <span class="keyword">return</span> weights</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment"># predict function 返回一组预测结果，由0或1构成的 (m,1)矩阵</span></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">predict</span><span class="params">(test_X, weights)</span>:</span></span><br><span class="line">    m = test_X.shape[<span class="number">0</span>]</span><br><span class="line">    <span class="comment">#由sigmoid函数的性质，z = w * x , z大于0时，sigmoid(Z)&gt;0.5 即预测为1，反之预测为0 </span></span><br><span class="line"> 
   p = np.dot(test_X, weights)</span><br><span class="line">    <span class="keyword">for</span> k <span class="keyword">in</span> range(m):</span><br><span class="line">        <span class="keyword">if</span> p[k] &gt; <span class="number">0</span>:</span><br><span class="line">            p[k] = <span class="number">1</span></span><br><span class="line">        <span class="keyword">else</span>:</span><br><span class="line">            p[k] = <span class="number">0</span></span><br><span class="line"></span><br><span class="line">    <span class="keyword">return</span> p</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment"># calculate accuracy 计算准确率，一列是预测结果，一列是真实结果，结果相同则计数</span></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">accuracy</span><span class="params">(predict_Y, Y)</span>:</span></span><br><span class="line">    m, n = Y.shape</span><br><span class="line">    Matched = <span class="number">0</span></span><br><span class="line">    <span class="keyword">for</span> k <span class="keyword">in</span> range(m):</span><br><span class="line">        <span class="keyword">if</span> predict_Y[k] == Y[k]:</span><br><span class="line">            Matched += <span class="number">1</span></span><br><span class="line">        <span class="keyword">else</span>:</span><br><span class="line">            Matched += <span class="number">0</span></span><br><span class="line">    <span class="keyword">return</span> Matched / m</span><br><span class="line"></span><br><span class="line"><span class="comment">#数据矩阵化</span></span><br><span class="line"></span><br><span class="line">df = pd.read_csv(<span class="string">'Transfusion.txt'</span>)</span><br><span class="line">df[<span class="string">'one'</span>] = <span class="number">1</span></span><br><span class="line"><span class="comment">#print(df)</span></span><br><span class="line"></span><br><span 
class="line">X = np.hstack((np.mat(df[<span class="string">'one'</span>]).T,np.mat(df.iloc[:,<span class="number">0</span>:<span class="number">4</span>])))</span><br><span class="line">Y = np.mat(df.iloc[:,<span class="number">4</span>]).T</span><br><span class="line"><span class="comment">#print(X,Y)</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment"># 留一法：有m个数据样本，k折交叉验证是把样本划分为10等份，留一法就是k=m时的场景，即每次留1个样本做测试集，剩余的全部做训练集</span></span><br><span class="line">total = X.shape[<span class="number">0</span>]</span><br><span class="line">sum = <span class="number">0</span></span><br><span class="line"><span class="keyword">for</span> k <span class="keyword">in</span> range(total):</span><br><span class="line">    test_index = k  <span class="comment"># 测试集下标</span></span><br><span class="line"></span><br><span class="line">    test_X = X[k]</span><br><span class="line">    test_Y = Y[k]</span><br><span class="line"></span><br><span class="line">    train_X = np.delete(X, test_index, axis=<span class="number">0</span>)</span><br><span class="line">    train_Y = np.delete(Y, test_index, axis=<span class="number">0</span>)</span><br><span class="line"></span><br><span class="line">    <span class="comment"># 对率回归</span></span><br><span class="line">    weights = grad(train_X, train_Y)</span><br><span class="line"></span><br><span class="line">    <span class="comment"># 统计正确率</span></span><br><span class="line">    p = predict(test_X, weights)</span><br><span class="line">    sum += accuracy(p, test_Y)</span><br><span class="line"></span><br><span class="line">print(<span class="string">'''LeaveOneOut's Accuracy: '''</span>, sum / total)</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment">#十折交叉验证,把样本分成10等分，在这10份数据中依次抽取一份做测试集，剩余9份做训练集，重复10次</span></span><br><span class="line">total = X.shape[<span 
class="number">0</span>]</span><br><span class="line">num_split = int(total / <span class="number">10</span>)</span><br><span class="line">sum = <span class="number">0</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> k <span class="keyword">in</span> range(<span class="number">10</span>):</span><br><span class="line"></span><br><span class="line">    <span class="comment">#选择测试集的下标</span></span><br><span class="line">    test_index = range(k * num_split , (k+<span class="number">1</span>) * num_split)</span><br><span class="line">    </span><br><span class="line">    test_X = X[test_index]</span><br><span class="line">    test_Y = Y[test_index]</span><br><span class="line">    </span><br><span class="line">    train_X = np.delete(X,test_index,axis=<span class="number">0</span>)</span><br><span class="line">    train_Y = np.delete(Y,test_index,axis=<span class="number">0</span>)</span><br><span class="line">    </span><br><span class="line">    <span class="comment">#求对率回归最优参数</span></span><br><span class="line">    weights = grad(train_X,train_Y)</span><br><span class="line">    <span class="comment">#print(weights)</span></span><br><span class="line">    <span class="comment">#统计每次组的正确率</span></span><br><span class="line">    p = predict(test_X,weights)</span><br><span class="line">    sum += accuracy(p,test_Y)</span><br><span class="line">    <span class="comment">#result += predict(test_X,weights)==test_Y ? 
1:0</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># correct count / total runs = accuracy</span></span><br><span class="line">print(<span class="string">'''10-foldCrossValidation's Accuracy: '''</span>,sum/<span class="number">10</span>)</span><br></pre></td></tr></table></figure><p>The results:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">LeaveOneOut&apos;s Accuracy:  0.7831325301204819</span><br><span class="line"></span><br><span class="line">10-foldCrossValidation&apos;s Accuracy:  0.7459459459459458</span><br></pre></td></tr></table></figure><p>Leave-one-out comes out nearly 4 percentage points more accurate than 10-fold cross-validation. The difference is not large, but leave-one-out takes far longer to run.</p><p>The reason: 10-fold cross-validation trains only 10 models, while leave-one-out trains as many models as there are samples, 748 here, so it is much slower.</p><p>Clearly, leave-one-out does not scale to large datasets. (With 1,000,000 samples you would have to train 1,000,000 models: a few minutes to code, days to train?)</p>]]></content>
    
    <summary type="html">
    
      
      
        &lt;h1 id=&quot;西瓜书习题3-4-交叉验证法-：&quot;&gt;&lt;a href=&quot;#西瓜书习题3-4-交叉验证法-：&quot; class=&quot;headerlink&quot; title=&quot;Watermelon Book Exercise 3.4 (cross-validation):&quot;&gt;&lt;/a&gt;Watermelon Book Exercise 3.4 (cross-validation):&lt;/h1&gt;&lt;p&gt;Pick two UCI
      
    
    </summary>
    
      <category term="ML" scheme="https://qwq1082.github.io/categories/ML/"/>
    
    
  </entry>
  
  <entry>
    <title>Setting Up an Image Host on GitHub</title>
    <link href="https://qwq1082.github.io/2019/07/09/%E5%9C%A8GitHub%E4%B8%8A%E5%BB%BA%E5%9B%BE%E5%BA%8A/"/>
    <id>https://qwq1082.github.io/2019/07/09/在GitHub上建图床/</id>
    <published>2019-07-09T12:12:23.000Z</published>
    <updated>2019-07-09T13:08:45.206Z</updated>
    
    <content type="html"><![CDATA[<h5 id="最近用hexo整了个博客，我现在的感受就是这也太tm折腾了。"><a href="#最近用hexo整了个博客，我现在的感受就是这也太tm折腾了。" class="headerlink" title="I recently set up a blog with hexo, and my takeaway is that it is way too damn fiddly."></a>I recently set up a blog with hexo, and my takeaway is that it is way too damn fiddly.</h5><p>Writing posts in markdown means inserting images, and that seemingly simple step turns out to be a real hassle: local images must go through an image host before they can appear on the server. Online guides keep recommending Tencent Cloud or Qiniu Cloud; I tried both, found they cost money, and promptly looked elsewhere. (the sound of a broke student qaq)</p><h3 id="现在来介绍一下如何直接在github上建图床"><a href="#现在来介绍一下如何直接在github上建图床" class="headerlink" title="Here is how to host images directly on github"></a>Here is how to host images directly on github</h3><p>GitHub is genuinely generous here: free and without a storage cap; a single file just cannot exceed 100M, and files over 50M already trigger a warning. That is more than enough for a blog's image host. Without further ado, hands on:</p><ol><li><p>Under repositories, create a new repository named your-github-username.github.io, click Settings at the top, scroll down to the GitHub Pages section, and set Source to master branch</p><p>These are the standard steps for setting up the blog itself. If you already have this repository, you can upload files to it directly. For example<br><img src="https://qwq1082.github.io/2019/07/09/%E5%9C%A8GitHub%E4%B8%8A%E5%BB%BA%E5%9B%BE%E5%BA%8A//20190709203332774.png" alt="screenshot"></p></li></ol><ol start="2"><li><p>With the image uploaded, you can already open it directly through its link</p><p>Link format: <a href="https://github用户名.github.io/存储库名/你的图片名" target="_blank" rel="noopener">https://github用户名.github.io/存储库名/你的图片名</a>    (include the file extension)<br><img src="https://qwq1082.github.io/2019/07/09/%E5%9C%A8GitHub%E4%B8%8A%E5%BB%BA%E5%9B%BE%E5%BA%8A/20190709203404689.png" alt="screenshot"></p></li></ol><ol start="3"><li>Insert the image link in your markdown and the image shows up<img src="https://qwq1082.github.io/2019/07/09/%E5%9C%A8GitHub%E4%B8%8A%E5%BB%BA%E5%9B%BE%E5%BA%8A/20190709203517666.png" alt="screenshot"></li></ol><h3 id="以上是图床的基本原理！！！"><a href="#以上是图床的基本原理！！！" class="headerlink" title="That is the basic idea of an image host!!!"></a>That is the basic idea of an image host!!!</h3><p>The blog's project files are regenerated on every deploy, which makes it awkward to keep uploaded images around. How do we solve that?</p><ol><li><p>In the blog's root directory, open the _config.yml file, find the post_asset_folder field, and make sure it is set to true</p><p>post_asset_folder:true</p><p>With post_asset_folder set, Hexo automatically creates a folder with the same name as each new post; put all of a post's assets into that folder and they become much easier to use. Like this<br><img src="https://qwq1082.github.io/2019/07/09/%E5%9C%A8GitHub%E4%B8%8A%E5%BB%BA%E5%9B%BE%E5%BA%8A/20190709203604714.png" alt="screenshot"></p></li><li><p>Put every image you need into this folder; after the next hexo d deploy you can see<img src="https://qwq1082.github.io/2019/07/09/%E5%9C%A8GitHub%E4%B8%8A%E5%BB%BA%E5%9B%BE%E5%BA%8A/20190709203702409.png" alt="screenshot"></p><p>The image files are now in the github repository, so by the principle above, note down each file's path and add the image link while writing your markdown</p><p>Link format: <a href="https://github用户名.github.io/存储库名/你的图片名" target="_blank" rel="noopener">https://github用户名.github.io/存储库名/你的图片名</a>    (include the file extension)</p><p>For instance, the link to my Pikachu gif  2.gif  above is:    <a href="https://qwq1082.github.io/2019/07/09/pikachu/2.gif" target="_blank" rel="noopener">https://qwq1082.github.io/2019/07/09/pikachu/2.gif</a></p><p>Once the markdown is written, deploy again<br><img src="https://qwq1082.github.io/2019/07/09/%E5%9C%A8GitHub%E4%B8%8A%E5%BB%BA%E5%9B%BE%E5%BA%8A/20190709203803699.png" alt="screenshot"></p><p>Come and visit!   <a href="https://qwq1082.github.io/2019/07/09/pikachu/" target="_blank" rel="noopener">https://qwq1082.github.io/2019/07/09/pikachu/</a></p></li></ol>]]></content>
    
    <summary type="html">
    
      
      
        &lt;h5 id=&quot;最近用hexo整了个博客，我现在的感受就是这也太tm折腾了。&quot;&gt;&lt;a href=&quot;#最近用hexo整了个博客，我现在的感受就是这也太tm折腾了。&quot; class=&quot;headerlink&quot; title=&quot;I recently set up a blog with hexo, and my takeaway is that it is way too damn fiddly.&quot;&gt;
      
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>pikachu</title>
    <link href="https://qwq1082.github.io/2019/07/09/pikachu/"/>
    <id>https://qwq1082.github.io/2019/07/09/pikachu/</id>
    <published>2019-07-09T03:12:23.000Z</published>
    <updated>2019-07-09T09:54:44.493Z</updated>
    
    <content type="html"><![CDATA[<p><img src="https://qwq1082.github.io/2019/07/09/pikachu/2.gif" alt></p><p>Setting up an image host on GitHub</p>]]></content>
    
    <summary type="html">
    
      
      
        &lt;p&gt;&lt;img src=&quot;https://qwq1082.github.io/2019/07/09/pikachu/2.gif&quot; alt&gt;&lt;/p&gt;
&lt;p&gt;Setting up an image host on GitHub&lt;/p&gt;

      
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>Watermelon Book Exercise 3.3 (Logistic Regression, LR)</title>
    <link href="https://qwq1082.github.io/2019/07/08/%E8%A5%BF%E7%93%9C%E4%B9%A6%E4%B9%A0%E9%A2%983-3/"/>
    <id>https://qwq1082.github.io/2019/07/08/西瓜书习题3-3/</id>
    <published>2019-07-08T13:41:33.000Z</published>
    <updated>2019-07-10T10:19:09.622Z</updated>
    
    <content type="html"><![CDATA[<h1 id="西瓜书习题-3-3-对率回归-LR"><a href="#西瓜书习题-3-3-对率回归-LR" class="headerlink" title="Watermelon Book Exercise 3.3 (logistic regression, LR)"></a>Watermelon Book Exercise 3.3 (logistic regression, LR)</h1><h2 id="编程实现对率回归，并给出西瓜数据集上的结果"><a href="#编程实现对率回归，并给出西瓜数据集上的结果" class="headerlink" title="Implement logistic regression and report its results on the watermelon dataset"></a>Implement logistic regression and report its results on the watermelon dataset</h2><p>The watermelon dataset:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line">ID  density  Sugar_content  label</span><br><span class="line"></span><br><span class="line">0    1    0.697         0.4600      1</span><br><span class="line">1    2    0.774         0.3760      1</span><br><span class="line">2    3    0.634         0.2640      1</span><br><span class="line">3    4    0.608         0.3180      1</span><br><span class="line">4    5    0.556         0.2150      1</span><br><span class="line">5    6    0.403         0.2370      1</span><br><span class="line">6    7    0.481         0.1490      1</span><br><span class="line">7    8    0.437         0.2110      1</span><br><span class="line">8    9    0.666         0.0910      0</span><br><span class="line">9   10    0.243         0.0267      0</span><br><span class="line">10  11    0.245         0.0570      0</span><br><span class="line">11  12    0.343         0.0990      0</span><br><span class="line">12  13    0.639         0.1610      0</span><br><span class="line">13  14    0.657         0.1980      0</span><br><span class="line">14  15    0.360         0.3700      0</span><br><span class="line">15  16    0.593         0.0420      0</span><br><span class="line">16  17    0.719         0.1030      0</span><br></pre></td></tr></table></figure><p>The key to this exercise is understanding logistic regression, so here is my handwritten derivation:<br><img src="https://img-blog.csdnimg.cn/20190709100451704.jpg" alt="handwritten derivation"></p><p><img src="https://img-blog.csdnimg.cn/20190709100605503.jpg" alt="handwritten derivation"></p><h4 id="推导RL的过程，得到了梯度公式，接下来用梯度上升算法实现RL（还有一种是用牛顿法实现，以后有时间在补充吧qwq-）"><a href="#推导RL的过程，得到了梯度公式，接下来用梯度上升算法实现RL（还有一种是用牛顿法实现，以后有时间在补充吧qwq-）" class="headerlink" title="Deriving LR gives the gradient formula; next we implement LR with gradient ascent (it can also be done with Newton's method, which I may add later qwq~)"></a>Deriving LR gives the gradient formula; next we implement LR with gradient ascent (it can also be done with Newton's method, which I may add later qwq~)</h4><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="keyword">import</span> pandas <span class="keyword">as</span> pd</span><br><span class="line"><span class="keyword">import</span> matplotlib.pyplot <span class="keyword">as</span> plt</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># read the data file</span></span><br><span class="line">df = pd.read_csv(<span class="string">'watermelon3.0alpha.csv'</span>)</span><br><span class="line">print(df)</span><br><span class="line"><span
class="comment">#方便矩阵运算，添一列1</span></span><br><span class="line">df[<span class="string">'one'</span>] = <span class="number">1.0</span></span><br><span class="line"><span class="comment">#将训练集装进矩阵</span></span><br><span class="line">train_X = np.mat(df[[<span class="string">'one'</span>,<span class="string">'density'</span>,<span class="string">'Sugar_content'</span>]])</span><br><span class="line"><span class="comment">#标签</span></span><br><span class="line">labels = np.mat(df[[<span class="string">'label'</span>]])</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">#sigmoid函数</span></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">sigmoid</span><span class="params">(z)</span>:</span></span><br><span class="line">    <span class="keyword">return</span> <span class="number">1.0</span>/(<span class="number">1.0</span> + np.exp(-z))</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">#梯度上升算法</span></span><br><span class="line"><span 
class="function"><span class="keyword">def</span> <span class="title">grad</span><span class="params">(train_X,labels,iters = <span class="number">2000</span>)</span>:</span></span><br><span class="line">    m,n = train_X.shape</span><br><span class="line">    <span class="comment">#步长alpha</span></span><br><span class="line">    alpha = <span class="number">0.05</span></span><br><span class="line">    <span class="comment">#初始化权重，全设为1</span></span><br><span class="line">    weights = np.ones((n,<span class="number">1</span>))</span><br><span class="line">    </span><br><span class="line">    <span class="comment">#2000次迭代</span></span><br><span class="line">    <span class="keyword">for</span> k <span class="keyword">in</span> range(iters):</span><br><span class="line">        <span class="comment">#沿着梯度方向，向前移动，并更新权重</span></span><br><span class="line">        P = sigmoid(train_X.dot(weights))</span><br><span class="line">        error = labels - P</span><br><span class="line">        weights += alpha * np.dot(train_X.T,error)</span><br><span class="line"></span><br><span class="line">    <span class="keyword">return</span> weights</span><br><span class="line"></span><br><span class="line"><span class="comment">#求出最优回归参数</span></span><br><span class="line">weights = grad(train_X,labels)</span><br><span class="line">print(weights)</span><br></pre></td></tr></table></figure><p>求得参数如下：</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[[-3.12066518]</span><br><span class="line"> [ 0.76966008]</span><br><span class="line"> [13.22972573]]</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span 
class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">#绘图</span></span><br><span class="line">x1,y1 = [],[]</span><br><span class="line">x2,y2 = [],[]</span><br><span class="line">x3,y3 = [],[]</span><br><span class="line">x4,y4 = [],[]</span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> k <span class="keyword">in</span> range(train_X.shape[<span class="number">0</span>]):</span><br><span class="line">    <span class="keyword">if</span> labels[k] == <span class="number">1</span>:</span><br><span class="line">        <span class="keyword">if</span> sigmoid(np.dot(train_X[k,:],weights)) &gt;= <span class="number">0.5</span> :</span><br><span class="line">            x1.append(train_X[k,<span class="number">1</span>])</span><br><span class="line">            y1.append(train_X[k,<span class="number">2</span>])</span><br><span class="line">        <span 
class="keyword">else</span>:</span><br><span class="line">            x2.append(train_X[k,<span class="number">1</span>])</span><br><span class="line">            y2.append(train_X[k,<span class="number">2</span>]) </span><br><span class="line">    <span class="keyword">else</span>:  </span><br><span class="line">        <span class="keyword">if</span> sigmoid(np.dot(train_X[k,:],weights)) &lt; <span class="number">0.5</span> :</span><br><span class="line">            x3.append(train_X[k,<span class="number">1</span>])</span><br><span class="line">            y3.append(train_X[k,<span class="number">2</span>])</span><br><span class="line">        <span class="keyword">else</span>:</span><br><span class="line">            x4.append(train_X[k,<span class="number">1</span>])</span><br><span class="line">            y4.append(train_X[k,<span class="number">2</span>])</span><br><span class="line">            </span><br><span class="line">plt.scatter(x1,y1,s=<span class="number">30</span>,c=<span class="string">'red'</span>)</span><br><span class="line">plt.scatter(x2,y2,s=<span class="number">30</span>,c=<span class="string">'red'</span>,marker=<span class="string">'x'</span>)</span><br><span class="line">plt.scatter(x3,y3,s=<span class="number">30</span>,c=<span class="string">'green'</span>)</span><br><span class="line">plt.scatter(x4,y4,s=<span class="number">30</span>,c=<span class="string">'green'</span>,marker=<span class="string">'x'</span>)</span><br><span class="line"></span><br><span class="line"><span class="comment"># plot the line  w0 + w1*x1 + w2*x2 = 0</span></span><br><span class="line">X = np.arange(<span class="number">0</span>,<span class="number">0.8</span>,<span class="number">0.01</span>)</span><br><span class="line">Y = -(weights[<span class="number">0</span>] + weights[<span class="number">1</span>] * X)/weights[<span class="number">2</span>]</span><br><span class="line"></span><br><span class="line"><span class="comment"># note: use plot for lines and scatter for scatter points</span></span><br><span class="line">plt.plot(X,Y)</span><br><span class="line"></span><br><span class="line">plt.xlabel(<span class="string">'Density'</span>)</span><br><span class="line">plt.ylabel(<span class="string">'Sugar_Content'</span>)</span><br><span class="line">plt.title(<span class="string">"LogisticRegression"</span>)</span><br><span class="line">plt.show()</span><br></pre></td></tr></table></figure><p><img src="https://img-blog.csdnimg.cn/2019070910063746.png" alt="在这里插入图片描述"></p>]]></content>
    
    <summary type="html">
    
      
      
        &lt;h1 id=&quot;西瓜书习题-3-3-对率回归-LR&quot;&gt;&lt;a href=&quot;#西瓜书习题-3-3-对率回归-LR&quot; class=&quot;headerlink&quot; title=&quot;西瓜书习题 3.3(对率回归 LR)&quot;&gt;&lt;/a&gt;Watermelon Book Exercise 3.3 (Logistic Regression, LR)&lt;/h1&gt;&lt;h2 id=&quot;编程实
      
    
    </summary>
    
      <category term="ML" scheme="http://yoursite.com/categories/ML/"/>
    
    
      <category term="Python" scheme="http://yoursite.com/tags/Python/"/>
    
      <category term="MachineLearning" scheme="http://yoursite.com/tags/MachineLearning/"/>
    
      <category term="LR" scheme="http://yoursite.com/tags/LR/"/>
    
  </entry>
  
  <entry>
    <title>Building a Convolutional Neural Network</title>
    <link href="http://yoursite.com/2019/07/02/%E5%8D%B7%E7%A7%AF%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C%E7%9A%84%E6%90%AD%E5%BB%BA/"/>
    <id>http://yoursite.com/2019/07/02/卷积神经网络的搭建/</id>
    <published>2019-07-02T04:11:44.000Z</published>
    <updated>2019-07-02T04:12:34.131Z</updated>
    
    <content type="html"><![CDATA[<p><strong>Building a Convolutional Neural Network</strong></p><p><strong>1 Preprocessing the training and test images</strong></p><p>Note: the images downloaded from the web come in all kinds of formats and sizes, so batch preprocessing is necessary.</p><p>(1) Resize every image to 100*100; the code below can serve as a reference.</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">convertjpg</span><span class="params">(jpgfile,outdir,width=<span class="number">100</span>,height=<span class="number">100</span>)</span>:</span></span><br><span class="line">    img = Image.open(<span class="string">'C:/Users/ASUS/Desktop/cat/暹罗猫/'</span>+jpgfile)</span><br><span class="line">    <span class="keyword">try</span>:</span><br><span class="line">        new_img = img.resize((width, height), Image.BILINEAR)</span><br><span class="line">        new_img.save(os.path.join(outdir, os.path.basename(jpgfile)))</span><br><span class="line">    <span class="keyword">except</span> Exception <span class="keyword">as</span> e:</span><br><span class="line">        print(e)</span><br><span class="line"><span class="keyword">for</span> jpgfile <span class="keyword">in</span> os.listdir(<span class="string">'C:/Users/ASUS/Desktop/cat/暹罗猫'</span>):</span><br><span class="line">    print(jpgfile)</span><br><span class="line">    convertjpg(jpgfile, <span class="string">"./Xianluo"</span>)</span><br></pre></td></tr></table></figure><p><strong>Unify the image file names</strong></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">ranamesJPG</span><span class="params">(filepath, kind)</span>:</span></span><br><span class="line">    images = os.listdir(filepath)</span><br><span class="line">    <span class="keyword">for</span> name <span class="keyword">in</span> images:</span><br><span class="line">        os.rename(filepath+name, filepath+kind+<span class="string">'_'</span>+name.split(<span class="string">'.'</span>)[<span class="number">0</span>]+<span class="string">'.jpg'</span>)</span><br><span class="line">        print(name)</span><br><span class="line">        print(name.split(<span class="string">'.'</span>)[<span class="number">0</span>])</span><br><span class="line">ranamesJPG(<span class="string">'C:/Users/ASUS/Desktop/cat/英国短毛猫/'</span>,<span class="string">'3'</span>)</span><br></pre></td></tr></table></figure><p>Here it is worth explaining why we rename the files.</p><p>The training and test sets contain several hundred images in total, and during training we must give the convolutional network not only each image but also the label that corresponds to it; adding those labels by hand would obviously be a huge amount of work. Note: 0_xxx stands for Ragdoll, 1_xxx for Bombay, 2_xxx for Siamese, and 3_xxx for British Shorthair.</p><p><strong>2 Training and testing the model</strong></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> os</span><br><span class="line"><span class="keyword">from</span> PIL <span class="keyword">import</span> Image</span><br><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="keyword">from</span> keras.utils <span class="keyword">import</span> np_utils</span><br><span class="line"><span class="keyword">from</span> keras.models <span class="keyword">import</span> Sequential</span><br><span class="line"><span class="keyword">from</span> keras.layers.core <span class="keyword">import</span> Dense, Dropout, Activation, Flatten</span><br><span class="line"><span
class="keyword">from</span> keras.optimizers <span class="keyword">import</span> SGD, RMSprop, Adam</span><br><span class="line"><span class="keyword">from</span> keras.layers <span class="keyword">import</span> Conv2D, MaxPooling2D</span><br><span class="line"></span><br><span class="line"><span class="comment">#--------------------------------------------------------------------------------------------</span></span><br></pre></td></tr></table></figure><h1 id="将训练集图片转换成数组"><a href="#将训练集图片转换成数组" class="headerlink" title="将训练集图片转换成数组"></a>Convert the training-set images to arrays</h1><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">ima1 = os.listdir(<span class="string">'./cat/train'</span>)</span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">read_image1</span><span class="params">(filename)</span>:</span></span><br><span class="line">    img = Image.open(<span class="string">'./cat/train/'</span>+filename).convert(<span class="string">'RGB'</span>)</span><br><span class="line">    <span class="keyword">return</span> np.array(img)</span><br><span class="line"></span><br><span class="line">x_train = []</span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> i <span class="keyword">in</span> ima1:</span><br><span class="line">    x_train.append(read_image1(i))</span><br><span class="line"></span><br><span class="line">x_train = np.array(x_train)</span><br></pre></td></tr></table></figure><h1 id="根据文件名提取标签"><a href="#根据文件名提取标签" class="headerlink" title="根据文件名提取标签"></a>Extract labels from the file names</h1><figure
class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">y_train = []</span><br><span class="line"><span class="keyword">for</span> filename <span class="keyword">in</span> ima1:</span><br><span class="line">    y_train.append(int(filename.split(<span class="string">'_'</span>)[<span class="number">0</span>]))</span><br><span class="line"></span><br><span class="line">y_train = np.array(y_train)</span><br></pre></td></tr></table></figure><h1 id="将测试集图片转化成数组"><a href="#将测试集图片转化成数组" class="headerlink" title="将测试集图片转化成数组"></a>Convert the test-set images to arrays</h1><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">ima2 = os.listdir(<span class="string">'./cat/test'</span>)</span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">read_image2</span><span class="params">(filename)</span>:</span></span><br><span class="line">    img = Image.open(<span class="string">'./cat/test/'</span>+filename).convert(<span class="string">'RGB'</span>)</span><br><span class="line">    <span class="keyword">return</span> np.array(img)</span><br><span class="line"></span><br><span class="line">x_test = []</span><br><span class="line"></span><br><span class="line"><span
class="keyword">for</span> i <span class="keyword">in</span> ima2:</span><br><span class="line">    x_test.append(read_image2(i))</span><br><span class="line"></span><br><span class="line">x_test = np.array(x_test)</span><br></pre></td></tr></table></figure><h1 id="根据文件名提取标签-1"><a href="#根据文件名提取标签-1" class="headerlink" title="根据文件名提取标签"></a>Extract labels from the file names</h1><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">y_test = []</span><br><span class="line"><span class="keyword">for</span> filename <span class="keyword">in</span> ima2:</span><br><span class="line">    y_test.append(int(filename.split(<span class="string">'_'</span>)[<span class="number">0</span>]))</span><br><span class="line"></span><br><span class="line">y_test = np.array(y_test)</span><br><span class="line"><span class="comment">#-------------------------------------------------------------------------------------</span></span><br></pre></td></tr></table></figure><h1 id="将标签转换格式"><a href="#将标签转换格式" class="headerlink" title="将标签转换格式"></a>Convert the label format</h1><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">y_train = np_utils.to_categorical(y_train)</span><br><span class="line">y_test = np_utils.to_categorical(y_test)</span><br></pre></td></tr></table></figure><h1 id="将特征点从0255转换成01提高特征提取精度"><a href="#将特征点从0255转换成01提高特征提取精度" class="headerlink" title="将特征点从0255转换成01提高特征提取精度"></a>Scale the features from 0~255 to 0~1 to improve feature extraction precision</h1><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">x_train = x_train.astype(<span class="string">'float32'</span>)</span><br><span class="line">x_test = x_test.astype(<span class="string">'float32'</span>)</span><br><span class="line">x_train /= <span class="number">255</span></span><br><span class="line">x_test /= <span class="number">255</span></span><br></pre></td></tr></table></figure><h1 id="搭建卷积神经网络"><a href="#搭建卷积神经网络" class="headerlink" title="搭建卷积神经网络"></a>Build the convolutional neural network</h1><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span
class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br></pre></td><td class="code"><pre><span class="line">model = Sequential()</span><br><span class="line">model.add(Conv2D(<span class="number">32</span>, (<span class="number">3</span>, <span class="number">3</span>), activation=<span class="string">'relu'</span>, input_shape=(<span class="number">100</span>, <span class="number">100</span>, <span class="number">3</span>)))</span><br><span class="line">model.add(Conv2D(<span class="number">32</span>, (<span class="number">3</span>, <span class="number">3</span>), activation=<span class="string">'relu'</span>))</span><br><span class="line">model.add(MaxPooling2D(pool_size=(<span class="number">2</span>, <span class="number">2</span>)))</span><br><span class="line">model.add(Dropout(<span class="number">0.25</span>))</span><br><span class="line"></span><br><span class="line">model.add(Conv2D(<span class="number">64</span>, (<span class="number">3</span>, <span class="number">3</span>), activation=<span class="string">'relu'</span>))</span><br><span class="line">model.add(Conv2D(<span class="number">64</span>, (<span class="number">3</span>, <span class="number">3</span>), activation=<span class="string">'relu'</span>))</span><br><span class="line">model.add(MaxPooling2D(pool_size=(<span class="number">2</span>, <span class="number">2</span>)))</span><br><span class="line">model.add(Dropout(<span class="number">0.25</span>))</span><br><span 
class="line"></span><br><span class="line">model.add(Flatten())</span><br><span class="line">model.add(Dense(<span class="number">256</span>, activation=<span class="string">'relu'</span>))</span><br><span class="line">model.add(Dropout(<span class="number">0.5</span>))</span><br><span class="line">model.add(Dense(<span class="number">4</span>, activation=<span class="string">'softmax'</span>))</span><br><span class="line"></span><br><span class="line">sgd = SGD(lr=<span class="number">0.01</span>, decay=<span class="number">1e-6</span>, momentum=<span class="number">0.9</span>, nesterov=<span class="literal">True</span>)</span><br><span class="line">model.compile(loss=<span class="string">'categorical_crossentropy'</span>, optimizer=sgd, metrics=[<span class="string">'accuracy'</span>])</span><br><span class="line"></span><br><span class="line">model.fit(x_train, y_train, batch_size=<span class="number">10</span>, epochs=<span class="number">32</span>)</span><br><span class="line">model.save_weights(<span class="string">'./cat/cat_weights.h5'</span>, overwrite=<span class="literal">True</span>)</span><br><span class="line"></span><br><span class="line">score = model.evaluate(x_test, y_test, batch_size=<span class="number">10</span>)</span><br><span class="line">print(score)</span><br></pre></td></tr></table></figure><p>Note: h5py must be installed in order to save and load weight files of the xx.h5 type.</p><p>Right-click and run the script to generate the weight file, which can then be used for later prediction.</p>]]></content>
    
    <summary type="html">
    
      
      
        &lt;p&gt;&lt;strong&gt;Building a Convolutional Neural Network&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1 Preprocessing the training and test images&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Note: the images downloaded from the web come in all kinds of formats and sizes, so batch preprocessing is necessary. &lt;/p&gt;
&lt;p&gt;(1) Resize every image to 100
      
    
    </summary>
    
    
  </entry>
  
</feed>
