Friday, 23 June 2017

Call a dll from Python

Letting VS2015 make a DLL called SomeDll for me, with these implementations:

// SomeDll.cpp : 
// Defines the exported functions for the DLL application.
//

#include "stdafx.h"
#include "SomeDll.h"


// This is an example of an exported variable
SOMEDLL_API int nSomeDll=0;

// This is an example of an exported function.
SOMEDLL_API int fnSomeDll(void)
{
    return 42;
}

// This is the constructor of a class that has been exported.
// see SomeDll.h for the class definition
CSomeDll::CSomeDll()
{
    return;
}
 

I then make a Python script, using ctypes to load the DLL and os to find it:

import os
import ctypes

os.chdir("C:\\Users\\sbkg525\\src\\SomeDll\\Debug")
SomeDll = ctypes.WinDLL("SomeDll.dll")


I can either use attributes of the library or use prototypes. I tried prototypes first. The function returns an int and takes no parameters:

proto = ctypes.WINFUNCTYPE(ctypes.c_int)
params = ()

answer = proto(("fnSomeDll", SomeDll), params)


Unfortunately this says

AttributeError: function 'fnSomeDll' not found

because the C++ names are mangled. extern "C" FTW another time; for now:

link.exe /dump /exports Debug\SomeDll.dll

          1    0 0001114F ??0CSomeDll@@QAE@XZ = @ILT+330(??0CSomeDll@@QAE@XZ)
          2    1 00011244 ??4CSomeDll@@QAEAAV0@$$QAV0@@Z = @ILT+575(??4CSomeDll@@QAEAAV0@$$QAV0@@Z)
          3    2 0001100A ??4CSomeDll@@QAEAAV0@ABV0@@Z = @ILT+5(??4CSomeDll@@QAEAAV0@ABV0@@Z)
          4    3 0001111D ?fnSomeDll@@YAHXZ = @ILT+280(?fnSomeDll@@YAHXZ)
          5    4 00018138 ?nSomeDll@@3HA = ?nSomeDll@@3HA (int nSomeDll)

Looks like we want number 4:

answer = proto(("?fnSomeDll@@YAHXZ", SomeDll), params)

print answer

>>> <WinFunctionType object at 0x0248A7B0>


Of course. It's a function. Let's call the function

print answer()
>>> 42


Done. I'll try attributes next. And functions which take parameters. Later, I'll look at other calling conventions.

First, the docs say, "foreign functions can be accessed as attributes of loaded shared libraries."
Given

extern "C"
{
    SOMEDLL_API void hello_world()
    {
        // no way of telling whether this has worked
    }

}

we call it like this

lib = ctypes.cdll.LoadLibrary('SomeDll.dll')
lib.hello_world()


It seems to assume a return type of int. For example,

extern "C"
{
    SOMEDLL_API double some_number()
    {
        return 1.01;
    }

}

called as follows

lib = ctypes.cdll.LoadLibrary('SomeDll.dll')
lib.hello_world()
val = lib.some_number()
print val, type(val)

Gives

-858993460, <type 'int'>


We need to specify the return type, since it's not an int (that -858993460 is 0xCCCCCCCC, MSVC's debug-build fill pattern for uninitialised memory - a hint we're reading back garbage):



lib.some_number.restype = ctypes.c_double
val = lib.some_number()
print val, type(val)


then we get what we want

1.01 <type 'float'>

We need to do likewise for parameters

extern "C"
{
    SOMEDLL_API double add_numbers(double x, double y)
    {
        return x + y;
    }
}


If we just try to call it with some floats we get an error

    total = lib.add_numbers(10.5, 25.7)
ctypes.ArgumentError: argument 1: <type 'exceptions.TypeError'>: Don't know how to convert parameter 1


Stating the parameter types fixes this:

lib.add_numbers.restype = ctypes.c_double
lib.add_numbers.argtypes = [ctypes.c_double, ctypes.c_double]
total = lib.add_numbers(10.5, 25.7)
print total, type(total)


36.2 <type 'float'>

Just one starter thought on strings. Returning a const char * seems to give a str in Python:

SOMEDLL_API const char * speak()
{
    return "Hello";
}

called like this



lib.speak.restype = ctypes.c_char_p
print lib.speak()


says "Hello"; using parameters as strings and particularly as refrences that can be changed needs some investigation.

Does a similar approach using attributes of the loaded library work for non-extern "C" functions? Can I use the proto approach on the extern "C" functions?

Let's see...
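My guesses before trying, for what they're worth. A mangled name isn't a valid Python identifier, so plain attribute access won't parse, but getattr takes any string; and a prototype should bind to an extern "C" export just as happily. An untested sketch:

# mangled C++ name via getattr - you can't write SomeDll.?fnSomeDll@@YAHXZ directly
# the mangling appears to say __cdecl (the A after Y), so load with CDLL this time
cdll_lib = ctypes.CDLL('SomeDll.dll')
fn = getattr(cdll_lib, '?fnSomeDll@@YAHXZ')
print fn()           # hopefully 42 again

# prototype approach on an extern "C" export
proto = ctypes.CFUNCTYPE(ctypes.c_double)
some_number = proto(('some_number', cdll_lib))
print some_number()  # hopefully 1.01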







Codes of Conduct

There's been yet another thread on Twitter about codes of conduct at conferences, and I wanted to compare them to the conditions of sale for gig or festival tickets. Let's consider a sample:
1. I can't turn up before it starts? Why even say that? Oh, and 2. I have to leave when it's over. THIS IS PC GONE MAD. OK, not PC. But why do you need to spell this out? It's like putting a precondition that a string has to terminate with a null before you can call strlen.
3. Yeah, rulez. Whatever. Surprised you didn't say it's illegal to break the law. What is this?
Blahblahblah.
9. Yeah, fair enough. But what about flame throwers?
No, no, no...
16. Does that mean I can bring *Real* weapons?
Blahblahblah.

25. There might be swear words?! Why tell me that?

Or maybe some people will bring kids with them, and that might need a sane conversation about a time and a place for certain behaviours. I know the bands, I know what kind of music to expect. I know what I'm walking into.

I also know there's a clearly marked welfare tent on site just in case.

And wardens on the campsite wearing obvious vests.

And hundreds of programmers listening to their favourite rock stars.
(OK, not all the punters are programmers, but many are).

Why did I ever go to a field full of metallers? (Knowing I'd probably be one of the few women there). I went with friends, which made me feel safer. I wanted to listen to bands I knew, and discover new music. If I'd gone by myself, knowing in advance about wardens and the welfare tent would have made me feel OK. Hey, the conditions of sale show the organisers have thought about things that might go wrong or concern people, and that makes me feel safer.

Do they make people think, "This is PC gone mad? If you need to tell me not to set fire to myself and drink myself to death then you are insulting me and I want nothing to do with this?"

Not by the looks of the number of people who turn up. And there have been more women and kids recently which is great.

What are codes of conduct for? Quite frankly, me. But not just me. They are not there to tell you how to behave because the organisers think white guys don't know how to conduct themselves, or that all such guys are potential rapists or murderers. The logic that stating a CoC is pointless because we all know how to be nice would, taken to the extreme, make laws pointless too. Why say "You aren't allowed to commit murder"? Surely that doesn't need pointing out? And yet most countries have such a law.

Conditions of sale and codes of conduct aren't laws, but they give a shared statement of expectations and make me feel OK about going to conferences/talks/festivals/gigs alone. And then I meet loads of new people and discover new music. And I can't wait till Bloodstock. Or the next tech talk/conference I go to.





Friday, 21 October 2016

Elastic stack - RTFM

I tried to set up ELK (well, just elasticsearch and kibana initially), with a view to monitoring a network.

Having tried to read the documentation for an older version than the one I'd downloaded, and furthermore the docs for *nix when I'm using Windows, I eventually restarted at the "Learn" pages on https://www.elastic.co/

There are a lot of links in there, and it's easy to get lost, but it is very well written.

This is my executive summary of what I think I did.

First, download the zip of kibana and elasticsearch.

From the bin directory for elasticsearch, run the elasticsearch.bat file, or run service install then service run. If you run the batch file it will spew logs to the console, as well as to a log file (in the logs folder). You can tail the file if you choose to run it as a service. Either works.

If you then open http://localhost:9200/ in a suitable browser you should see something like this:

{
  "name" : "Barbarus",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "bE-p5dLXQ_69o0FWQqsObw",
  "version" : {
    "number" : "2.4.1",
    "build_hash" : "c67dc32e24162035d18d6fe1e952c4cbcbe79d16",
    "build_timestamp" : "2016-09-27T18:57:55Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}
 
The name is a randomly assigned Marvel character. You can configure all of this, but don't need to just to get something up and running to explore. kibana will expect elasticsearch to be on port 9200, but again that is configurable. I am getting ahead of myself though.

Second, unzip kibana and run the batch file kibana.bat in the bin directory. This will witter to itself. It starts a webserver on port 5601 (again configurable, but this is the default), so open http://localhost:5601 in your browser.

kibana wants an "index" (way to find data), so we need to get some into elasticsearch: the first page will say "Configure an index pattern". This blog has a good walk through of kibana (so do the official docs).

All of the official docs tell you to use curl to add (or CRUD) data in elasticsearch, for example
curl -XPUT 'localhost:9200/customer/external/1?pretty' -d '
{
  "name": "John Doe"
}'

NEVER try that from a Windows prompt, even if you have curl installed. You need to escape the quotes, and even then I had trouble. You can put the data (the -d part) in a file instead and use @, but it's not worth it.
Python to the rescue, along with Requests: HTTP for Humans (pip install requests).
Now I can run the instructions in Python instead of shouting at a cmd prompt.

import requests
r = requests.get('http://localhost:9200/_cat/health?v')

r.text


Simple. The text property shows me the response. There is a status_code property too. And other goodies. See the manual. For this simple GET you could just point your browser at localhost:9200/_cat/health?v


Don't worry if the status is yellow - this just means you only have one node, so it can't replicate in case of disaster.

Notice the transport, http:// at the start. If you forget this, you'll get an error like
>>> r = requests.put('localhost:9200/customer/external/1?pretty', json={"name": "John Doe"})
...
    raise InvalidSchema("No connection adapters were found for '%s'" % url)
requests.exceptions.InvalidSchema: No connection adapters were found for 'localhost:9200/customer/external/1?pretty'



Now we can put in some data.

First make an index (elastic might add this if you try to put data under a non-existent index). We will then be able to point kibana at that index - I mentioned kibana wanted an index earlier.
r = requests.put('http://localhost:9200/customer?pretty')


Right, now we want some data.
>>> payload = {'name': 'John Doe'}
>>> r = requests.post('http://localhost:9200/customer/external/1?pretty', json=payload)


If you point your browser at localhost:9200/customer/external/1?pretty you should then see the data you created. We gave it an id of 1, but a unique id would be assigned automatically if we left that off.
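To see the auto-assigned id, POST without one (note POST rather than PUT, since we aren't naming the resource) and pull it out of the response - a quick sketch:

>>> r = requests.post('http://localhost:9200/customer/external?pretty',
...                   json={'name': 'Jane Doe'})
>>> r.json()['_id']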

We can use requests.delete to delete, and requests.post to update:
>>> r = requests.post('http://localhost:9200/customer/external/1/_update',
...                   json={"doc": {"name": "Jane Doe"}})

Now, this small record set won't be much use to us. The docs have a link to some json data; I downloaded some fictitious account data. SO to the rescue for uploading the file:


>>> with open('accounts.json', 'rb') as payload:
...     headers = {'content-type': 'application/x-www-form-urlencoded'}
...     r = requests.post('http://localhost:9200/bank/account/_bulk?pretty',
...                       data=payload, verify=False, headers=headers)
...

>>> r = requests.get('http://localhost:9200/bank/_search?q=*&pretty')
>>> r.json()
This is equivalent to using

>>> r = requests.post('http://localhost:9200/bank/_search?pretty',
...                   json={"query": {"match_all": {}}})

i.e. instead of q=* in the URI we have put the query in the request body.



Either way, you now have some data which you can point kibana at. In kibana, the Discover tab allows you to view the data by clicking through fields. The Visualize tab allows you to set up graphs. What wasn't immediately apparent was that once you have selected your buckets, fields and so forth, you need to press the green "play" button by the "options" to make it render your visualisation. And finally, I got a pie chart of the data. I now need to point it at some real data.


Wednesday, 30 March 2016

Pipeline 2016

A write-up of my notes: they may or may not make sense.

Keynote: Jez Humble "What I Learned From Three Years Of Sciencing The Cr*p Out Of Continuous Delivery" or "All about SCIENCE"

Surveys

Surveys are measures looking for latent constructs for feelings and similar - see psychometrics.
Surveys need a hypothesis to test and should be worded carefully.
Consider discriminant and convergent validity.
Test for false positives.

Consider the Westrum typology.
With six axes (rows) scaled across three columns (pathological, bureaucratic, generative) you can start spotting connections:

Pathological (power oriented)   | Bureaucratic (rule oriented)  | Generative (performance oriented)
Low cooperation                 | Modest cooperation            | High cooperation
Messengers shot                 | Messengers neglected          | Messengers trained
Responsibilities shirked        | Narrow responsibilities       | Risks are shared
Bridging discouraged            | Bridging tolerated            | Bridging encouraged
Failure leads to scapegoating   | Failure leads to justice      | Failure leads to inquiry
Novelty crushed                 | Novelty leads to problems     | Novelty implemented

For example "Failure leads to" has three different options: scapegoating, justice or inquiry. Where does your org come out for each question? If they say "It's all Matt's fault" and sack Matt that won't avoid mistakes happening again. Blameless postmortems are important.
IT and aviation are both high-tempo, high consequence environments. They are adaptive complex systems: there is frequently not enough information to make a decision. Therefore reduce the consequences of things going wrong.
In general for surveys, use a Likert type scale - use clearly worded statements on a scale, allowing numerical analysis. See if your questions "load together" (or bucket). Maybe spotting what's gone wrong with some software buckets into notification from outside (customers etc) and notification from inside (alerts etc).
Consider CMV, CMB - common method variance or bias. Look for early versus late respondents.
See https://puppetlabs.com/2015-devops-report for the previous devops survey.
In fact take this year's https://puppetlabs.com/blog/2016-state-devops-survey-here

IT performance

How do you measure it? How do you predict it? It seems that "I am satisfied with my job" is the biggest predictor of organisational performance.
Does your company have a culture of "autonomy, mastery, purpose"? What motivates us? [See Pink]

How do we measure IT performance? Consider lead time, release frequency, time to restore, change failure rate...
Going faster doesn't mean you break things, it actually makes you *more* stable, if you look at the data [citation needed]
"Bi-modal IT" is wrong: watch out for Jez's upcoming blog about "fast doesn't compromise safety"

Do we still want to work in the dark-ages of manual config and no test automation?

We claim we are doing continuous integration (CI) by redefining CI. Do devs merge to trunk daily? Do you have tests? Do you fix the build if it goes red?

Aside: "Surveys are a powerful source of confirmation bias"

Question: Can we work together when things go wrong?

Do you have peer reviewed changes? (Mind you, change advisory boards)

Science again (well, stats)

SEM: structural equation modelling: use this to avoid spurious correlations.

Apparently 25% of people do TDD - it's the lost XP practice. TDD forces you to write code in testable ways: it's not about the tests.

How good are your tests? Consider mutation testing, e.g. Ivan Moore's Jester.
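(The idea in miniature, as a toy Python sketch - nothing to do with Jester itself: deliberately break the code, a "mutant", and see whether the tests notice. A mutant that survives means the tests are too weak.)

def add(x, y):
    return x + y

def mutant_add(x, y):
    return x - y                  # deliberately mutated operator

def weak_suite(fn):
    return fn(0, 0) == 0          # both versions pass: the mutant survives

def better_suite(fn):
    return fn(2, 3) == 5          # only the real code passes: mutant killed

assert weak_suite(add) and weak_suite(mutant_add)
assert better_suite(add) and not better_suite(mutant_add)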

Change advisory boards don't work. They obviously impact throughput but have negligible impact on stability. Jez suggested the phrase "Risk management theatre".


Ian Watson and Chris Covell "Steps closer to awesome"

They work at Call Credit (formerly part of the Skipton Building Society) and talked about how to change an organisation.

Their hypothesis: "You already have the people you need."
"Metal as a service" sneaked a mention, since some people were playing buzz-word bingo.
Question: what would make this org "nirvana"?
They started broadcasting good (and bad) things to change the culture. e.g. moving away from a fear of failure. Having shared objectives helped.

We are people, not resources. "Matrix management" (cue obvious slides) - not a good thing. Be the "A" Team instead. (Or the Goonies.)

The environment matters. They suggested blowing up a red balloon each time you are interrupted for 15 seconds or more, giving a visual aid of the distractions.

They mentioned "Death to manual deployments" being worth reading.

They said devs should never have access to prod.
You need centres of excellence: peer pressure helps.
They have new bottlenecks: "two-speed IT"... the security team should be enablers, not the police.
They mentioned the "improvement kata"
They said you need your ducks in a straight line == a backlog of good stories.

Gary Frost "Financial Institutions Carry Too Much Risk, It’s Time To Embrace Continuous Delivery"

of 51zero.com
Sarbanes-Oxley (SOx) was introduced because of risk in finance. Has it worked? No.
It brought about segregation of duties and lots of change control review ("runbooks"). This is still high risk. There have been lots of breaches from IT departments, e.g. Knight Capital, NatWest (three times).
Why are we still failing, despite these "safety measures"?
We need fully automated testing including security and performance. We need micro-services (and containers), giving us isolation.
Aside: architecture diagrams...! Are they helpful? Are they even correct? Why not automatically generate these too, so they are at least correct?

What are the blockers? Silos. Move to collaborative environments.

Look out for new FinTech disruption (start-ups I presume)

Gustavo Elias "How To Deal With A Hot Potato"

He was landed with legacy code that was deeply flawed, had multiple responsibilities and carried high maintenance costs. In fact he calculated those costs and told management: for example, with downtime for deployment and 40 minutes to restart, he put the cost at over £500 per day per dev.
How to change this?
  • Re-architect
  • Reach zero downtime
  • Detach from the old release cycle
How?
Re-architect with micro-services and the strangle-vine pattern.
Reach zero downtime with a canary release and blue/green deployment. You need business onside for the extra hardware.
Old release cycle: bamboo plan - but this needs new machines.
In the end, be proud.

Pete Marshall "Achieving Continuous Delivery In A Legacy Environment"

The tech architect at Planday (a shift work app)
C.D. in a legacy environment: and not "chaotic delivery".
Ask the question: "What are your business goals?"
They had DNS load balancing, "interesting stand-ups" (nobody cared), no monitoring.
He started a tech radar: goals to get people on board.
He used a corp screensaver to communicate the pipeline vision.
How easy is your code to build? Do you know what's actually in prod? Can you find the delta?
He changed nant to msbuild.
He became a test mentor, having half hour sessions to increase test coverage.
They had estimation sessions and planning sessions.
Teams started to release on their own schedule with minimal disruption to others. 
Logging, monitoring and alerting helped: look for patterns in the logs. n.b. loggly (though cloud based with no instance in Europe so might be slow)
He mentioned feature toggles (I wondered how he implemented these: please not boolean flags in a database, but enough of my pain), though watch out - you can still get surprises.
He used the strangle pattern.
Don't do loads of things: do a couple of things you can actually measure.
Ask yourself "What's the risk of failure?"

Sally Goble "What do you do if you don't do testing?"

From QA at The Guardian
They previously had a two-week release cycle, with a staging environment and lots of manual testing.
They deployed at 8am on a Wednesday. A big news day delayed the release cycle by a week. 
They couldn't roll back.
They moved to automated tests - perhaps selenium. They were mainly comparing pixels.
Then they threw them out.
So, what does QA do if it doesn't do testing? They now make sure they are "not wrong long." i.e. they can fix things quickly.
They have feature switching, canary releases and monitoring (but avoid noise).
They are not a testing department but a quality department. They can concentrate on other things - like less data so apps don't blow out users' data plans or similar.

Steve Elliott "Measure everything, not just production"

Laterooms: something about badgers.
Tools: log aggregation: elastic stack. Metrics: kibana, grafana. Alerting: icinga(2) [like nagios only prettier]
Previously dev/test was slow, had no investment. They had flaky tests and it was difficult to spot trends.
They moved to instrumentation and tooling in dev.
"Measure ALL the things"
Be aware that dashboard fatigue is a thing.
He pointed us at github
Have lots of metrics, but don't use them to be Orwellian. Have data-driven retrospectives. (I once made a graph of who was asking whom for code review to reveal cliques in our team - data makes a difference! And pictures more so.) He mentioned that you need to make space for feelings in the retrospectives too.
He suggested mixing up the format to keep retrospectives fresh: consider using http://plans-for-retrospectives.com/index.html

He said he was running sentiment analysis on the tweets he got during his talk. 

He mentioned that Devops Manchester is always looking for speakers.

Summary

I'm so glad I went. It's useful to see people talking about their successes (and failures) and to reflect on common themes. "People not resources" struck a deep note for me. I am always inspired when I see people trying to make things better, no matter how hard.
I loved the brief mention of stats in the keynote. The main themes were, of course, about measuring and automating. I will spend time thinking about what else I can measure and how to do stats and present them to non-statisticians in a clear way.
Never under-estimate the power of saying "Prove it" when someone makes a claim.




Saturday, 12 March 2016

Random Magic

Have you ever written a unit test with magic numbers in it and felt bad? For example, given a C++ class that simulates stock prices, Simulation, you would expect a starting price of zero to stay at zero. Let's write a test for this using Catch:

TEST_CASE("simulation starting at 0 remains at 0", "[Property]")
{
    const double start_price = 0.0;
    const double drift       = 0.3;//or whatever
    const double volatility  = 0.2;//or whatever
    const double dt          = 0.1;//or whatever
    const unsigned int seed  = 1;  //or whatever
    Simulation price(start_price, drift, volatility, dt, seed);
    REQUIRE(price.update() == 0.0);
}

Oh dear; magic numbers. That sinking feeling when you don't know or care what values some variables take. The comments hint at the unhappiness. You could write a few more test cases with other numbers, or use a parameterised approach. Trying every possible double or int would be extreme, and make the unit tests slow. Unit tests should be fast, so we'd best not. We could try some random values instead of the magic numbers. This might lead to cases that sometimes fail, and unit tests should give repeatable results, so we'd best not.

Oh dear. If only we had some random magic to help. We need something that allows us to test that properties hold for a variety of cases. We don’t want to hand roll lots of ad-hoc test cases ourselves. If we generate random test cases we need the results to be clearly reported so we know what went wrong if something fails. We need property-based testing. Good news! Haskell got there long before us. 

QuickCheck “is a tool for testing Haskell programs automatically. The programmer provides a specification of the program, in the form of properties which functions should satisfy, and QuickCheck then tests that the properties hold in a large number of randomly generated cases.” [See the manual] You define a property, such as reversing a reversed list gives the original list

prop_RevRev xs = reverse (reverse xs) == xs
          where types = xs::[Int]

Then quickly check it holds for some randomly generated examples.


        Main> quickCheck prop_RevRev
        OK, passed 100 tests.

If a property doesn’t hold, quickCheck reports the case or “counter-example” for which it does not hold. Instead of my initial “example-based” test I can now test my property holds generally. Since the cases are randomly generated rather than exhaustive I may still miss problems, but look how much shorter the code was.

Wait a moment! I was trying to test some C++ and got distracted by Haskell. The good news is that ports of QuickCheck exist for various languages. For example, F# has FsCheck, Python has Hypothesis and, C++ being C++, there are various versions. I have tried Legiasoft's QuickCheck and showed my initial attempts at the #ACCU2015 conference.
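For a flavour of the Python take, the same reverse-reverse property in Hypothesis looks something like this (a quick sketch, assuming pip install hypothesis and a pytest-style test runner):

from hypothesis import given
import hypothesis.strategies as st

@given(st.lists(st.integers()))
def test_rev_rev(xs):
    # reversing a reversed list gives back the original list
    assert list(reversed(list(reversed(xs)))) == xs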

A recent blog from Spotify drew my attention to RapidCheck. This claims to integrate with Boost.Test and Google Test/Mock, though I haven't tried it yet. I wonder if I can make it play nicely with Catch; I will report back. Another interesting feature it supports is stateful testing, based on Erlang's port of QuickCheck. Since all this started with Haskell, many frameworks expect *pure* functions. Once in a while some of us are not quite as pure as we'd like, so I can imagine this being very useful.

I hope this has sparked some excitement about new ways of testing your code. Next time someone asks “Unit tests or integration tests?” say “Yes, and also property-based tests”.

Sunday, 27 September 2015

In vivo, in vitro, in silico


Some people get unit testing and some people don't. The reasons vary, usually based on a mixture of previous experience, lack of experience, fear of the unknown, or joy at a safer, quicker way of developing. One specific doubt crops up from time to time. It comes in the form of "If I test small bits, i.e. units, whatever *that* means, it proves nothing. I need to test the whole thing, or small parts of the whole thing, live."

My PhD was in toxicity prediction, which involves testing whether something will be toxic or not. You can test a chemical "in vivo" - administer it to several animals in varying doses, sit back and wait until half of them die or show toxicity symptoms, and record the doses. This gives you the Lx50 - for example, the LD50 is the lethal dose that kills 50% of the animals. Notice I said you *can* do this. You can also test the chemical on a set of cells in a test tube or petri dish - "in vitro" (in glass). Again you can find the dose which affects 50% of the specimens. I personally find this less upsetting, but I want to focus on parallels with testing code here. Finally, given all the data the previous tests have generated, you can analyse it, probably on a computer, perhaps finding chemical structure-activity relationships - SARs, or quantitative SARs, i.e. QSARs. These analyses are referred to as "in silico" - for obvious reasons. Some in silico experiments will just find clusters of similar chemicals, which can either alert you to groups that might need more detailed toxicity testing, or guide drug discovery by steering clear of molecules containing, say, benzene rings, which can be carcinogenic - saving time and money if you are trying to invent a drug that cures cancer. The value of testing on a computer, outside a live organism, should be clear. It can save time, money and even lives.


If we keep this in mind while considering testing a software system, rather than a biological system, we should be able to see some parallels. It is possible to test a live system - maybe on beta rather than "TIP" (Test in production). This can be a good thing. However, it might save time and money, and though maybe not lives, certainly headaches, to test parts of the live system in a sandboxed environment, analogous to in vitro. Running an end to end test against a test database instance with data in a specific state might count. Pushing the analogy further, you could even test small parts of the system, say units, whatever they are, in silico. Just try this small part away from the live system in a computer. This is worthwhile. It will be quicker, as toxicity in silico experiments are quicker - they tend to take hours rather than days. This is a good thing. Of course, you won't know exactly what will happen in a full live system, but you can catch problems earlier, before killing something. This is a Good Thing.

Other industries also test things in units. I could put together a car or a computer, hit the on switch and see if it works. However, I am given to believe that the components are tested thoroughly *before* the full system is built. If I build a PC and it doesn't work, I will then have to go through one part at a time and check. If someone tests the parts first, this will at least ensure I haven't put a dodgy power block in the whole thing. Testing small parts, preferably before testing the whole system, is a Good Thing.

I don't believe this short observation will change anyone's mind. But I hope it will give pause for thought to those who think only testing from end to end matters, and that testing "in silico" is a waste of time.










Wednesday, 26 August 2015

Language lawyers - or why words can have precise meaning

I was called a language lawyer the other day because I attempted to be precise about the state of play with some code. Initially I was taken aback, but eventually concluded that the phrase "language lawyer" was not being used precisely. It was used in the sense of "saying exactly what you mean". If I had clarified this, the self-reference might have sent me down a rabbit hole, so I left it.

The situation came about because a co-worker is changing some code which has a few unit tests, but, due to circumstances I won't bore you with, the code is in two repos - one has the tests and the other doesn't. I have been tasked with getting tests round any code changes he makes. I am therefore working in the repo with the tests. He, of course, has decided to work in the repo without tests, so doesn't know if his code changes break any existing tests.

/head-desk

It's like pair-programming but we have to talk in words rather than code.

I cannot guess what his code changes might do to the tests. This would be so much easier if he ran the tests as he changed the code. In fact, by definition, refactoring should involve running the tests as you go. Trying to ask questions like "Have you deleted the isValid function or changed its behaviour?" in order to get the tests to match his changes has resulted in answers like "No, well, a bit, but I haven't decided yet."

My attempts to print off the test names so we could discuss how the code actually behaved before the changes have been met with, "I haven't looked at the tests yet - I'd need to look at the code to see what they test." I think the tests have really clear names - like FooWithDefaultDateIsNotValid. He could look at the test code, but I was rather hoping the names were clear enough. I tried asking what new test *names* we might need, but got nowhere. He did suggest I check the private container didn't contain any default dates - and offered to add a getter so I could verify this from outside the object in test code. I muttered something about encapsulation and seppuku.

I'm not sure if this is happening because people are used to function names making no sense and to figuring things out one line at a time in a debugger, or if some people genuinely don't think in words. It's very difficult to communicate if people assume you aren't saying what you mean, realise you are, and then call you out for trying to be clear.