How to restart an EMV payment transaction from the beginning

I want to know whether there is any specific command I can send to the terminal so that it will start communicating with the Android device from the beginning of the transaction. Suppose the terminal sends the following APDU request when it first communicates with the Android payment app: 00A404000E325041592E5359532E444446303100. If I then want the terminal to send this APDU request again, what should I send in response to it? In other words, I want the following scenario:
terminal sends APDU request to device: 00A404000E325041592E5359532E444446303100
device sends some response to terminal: XX
the terminal then sends the APDU request to the device again: 00A404000E325041592E5359532E444446303100
What should the value of XX be?

You cannot do this. There are predefined steps for an EMV transaction, defined by EMV. To achieve worldwide interoperability, the card and the terminal must both behave in the manner defined by EMV.
Here your wish is to make the terminal resend a command of your choosing. Since the terminal's flow follows EMV, it cannot be done.
So if you want to restart the transaction, you have to physically remove the card from the terminal. Only after a power-off will the terminal run through the same transaction flow again. You cannot alter the terminal's flow in any way. The flow is the following:
Application Selection
Initiate Application Processing
Read Application Data
Processing Restrictions
Cardholder Verification
Offline Data Authentication
Terminal Risk Management
Terminal Action Analysis
Card Action Analysis 1
Online processing
Card Action Analysis 2
Completion
Issuer Script Processing

Command (.4gl) executed with SSH.NET SshClient.RunCommand fails with “No such file or directory”

I have a web service that uses SSH.NET to call a shell script on a Unix box.
If I run the script normally, it works fine and does its work correctly on the Informix DB.
Just some background:
I call a script that executes a .4gl program (I can't show this as it contains business knowledge).
The .4gl gives the following error back in a log when I execute it with SSH.NET:
fglgo: error while loading shared libraries: libiffgisql.so: cannot open shared object file: No such file or directory
file_load ended: 2017-09-21 15:37:01
C# code that executes the script via SSH.NET:
sshclients = new SshClient(p, 22, username, password);
sshclients.Connect();
sshclients.KeepAliveInterval = new TimeSpan(0, 0, 1);
sshclients.RunCommand("sh " + Script_dir);
I added the KeepAliveInterval to see if it helps.
My question is about the error I am getting from Unix/4gl.
Why is this happening, and how can I get the script to execute correctly?
SshClient.RunCommand uses the SSH "exec" channel internally. By default it (rightfully) does not allocate a pseudo terminal (PTY) for the session. As a consequence, a different set of startup scripts is (or might be) sourced, and/or different branches in those scripts are taken based on the absence/presence of the TERM environment variable. So the environment might differ from the interactive session you use with your SSH client.
So, in your case, the environment (e.g. the PATH and the library search path) is probably set differently, and consequently fglgo cannot load libiffgisql.so.
To verify that this is the root cause, disable the pseudo terminal allocation in your SSH client. For example in PuTTY, it's Connection > SSH > TTY > Don't allocate a pseudo terminal. Then go to Connection > SSH > Remote command and enter your sh ... command. Check Session > Close window on exit > Never and open the session. You should get the same "cannot open shared object file" error.
Ways to fix this, in preference order:
Fix the scripts not to rely on a specific environment.
Fix your startup scripts to set the PATH the same for both interactive and non-interactive sessions.
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. Syntax for that depends on the remote system and/or the shell. In common *nix systems, this works:
sshclients.RunCommand("PATH=\"$PATH;/path/to/g4l\" && sh ...");
Another (not recommended) approach is to force the pseudo terminal allocation for the "exec" channel.
SSH.NET does not support this, though. You would have to modify its code to issue the SendPseudoTerminalRequest request in the .RunCommand implementation (I didn't test this).
You can also try to use the "shell" channel via the .CreateShell (or .CreateShellStream) method. For those, SSH.NET does support pseudo terminal allocation.
Though, using a pseudo terminal to automate command execution can bring nasty side effects. See for example Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
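If you do go the shell-channel route despite those caveats, here is a minimal, untested sketch using SSH.NET's CreateShellStream (the terminal name and sizes are arbitrary; needs System.IO for StreamReader):
using (var shell = sshclients.CreateShellStream("xterm", 80, 24, 800, 600, 1024))
{
    // Runs inside a PTY session, so the interactive startup scripts apply.
    shell.WriteLine("sh " + Script_dir);
    shell.WriteLine("exit");
    // Raw output, including the terminal control "junk" mentioned above.
    Console.WriteLine(new StreamReader(shell).ReadToEnd());
}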
For similar issues, see:
Renci SSH.NET - no result string returned for opmnctl
Certain Unix commands fail with "... not found", when executed through Java using JSch
Commands executed using JSch behaves differently than in SSH terminal (bypasses confirm prompt message of "yes/"no")
JSch: Is there a way to expose user environment variables to "exec" channel?
I have seen similar questions asked by Informix 4gl developers as they transition to FourJs Genero and use its Web Services functionality. The question I'll put to them is: "who owns the fglgo/fglrun process that the Genero Application Server has launched, where is it running from, and what is its environment?" If needed, I'll illustrate with a simple program that does something like ...
MAIN
    RUN "env > /tmp/myname.txt"
    RUN "who >> /tmp/myname.txt"
    RUN "pwd >> /tmp/myname.txt"
END MAIN
... and then compare the output with what you get when the program is run from the command line. It is normally a case, as in the earlier answer, of configuring things so that the environment is set correctly before the 4gl program is executed.

How Do I Stream Video From My USB Webcam To A Remote HTML Page

I want to create a program that will stream video from my USB webcam over the internet to a web page.
Currently, I use a web service that, when triggered, calls fswebcam to capture an image, saves it to a data store, converts it to base64 and sends that data over to the HTML page, where it is rendered into the 'src' attribute of an 'img' element. The HTML page has JavaScript that calls this service once per second.
As you can tell, this is a horrible way to do it. I would rather have a proper stream if I can, but I don't know what technologies are available to achieve this.
The web service is written in Node.js. The server is running on a Raspberry Pi 2. I didn't put this question in the Raspberry Pi forum because I think it's a general Linux/programming issue.
Use a framework like livecam.
Webcam live-streaming solution using GStreamer and Node.js
This module allows you to stream your webcam over a network to be consumed by your browser and/or streamed to a file. See the documentation for more information.
Usage:
// npm install livecam
const LiveCam = require('livecam');
const webcam_server = new LiveCam({
    'start' : function() {
        console.log('WebCam server started!');
    }
});
webcam_server.broadcast();
The article here explains the whole process in the simplest way possible, with working images. This is the Linux way of doing it, not a Node.js script. I am stating the main part of it here.
Connect to your Pi using its IP address. 'pi' and 'raspberry' are the default username and password in Raspbian.
To update the system, run sudo apt-get update and sudo apt-get upgrade, one at a time.
Run sudo apt-get install motion to start the installation.
Now, to make sure that the camera is correctly detected, run lsusb and press Enter. You should see the name of your camera. If it is NOT there, then there is a problem with your camera, or the camera is not supported by 'motion'.
After the installation is complete, run sudo nano /etc/motion/motion.conf and press Enter.
Then you have to change some settings in the .conf file. It can sometimes be hard to find a setting, so use Ctrl+W to search for it. Follow these steps (a consolidated snippet of these settings is shown after the list):
Make sure 'daemon' is ON.
Set 'framerate' to anywhere between 1000 and 1500.
Keep 'Stream_port' to 8081.
'Stream_quality' should be 100.
Change 'Stream_localhost' to OFF.
Change 'webcontrol_localhost' to OFF.
Set 'quality' to 100.
Set 'width' & 'height' to 640 & 480.
Set 'post_capture' to 5.
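Put together, the relevant lines of /etc/motion/motion.conf would look roughly like this (the option names are lowercase in the file; the values are the ones suggested above, so adjust to taste):
daemon on
framerate 1500
stream_port 8081
stream_quality 100
stream_localhost off
webcontrol_localhost off
quality 100
width 640
height 480
post_capture 5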
Press Ctrl+X to exit, type y to save, and press Enter to confirm.
Again type in the command sudo nano /etc/default/motion and press enter.
Set start_motion_daemon to yes. Save and exit.
First of all, you have to restart the motion service. To do that, run sudo service motion restart and press Enter.
Then run sudo motion and press Enter. Now your server is ready.
Now open up your browser. Type in the IP address of your Raspberry Pi and the port number in this way:
192.168.0.107:8081 (first the IP address, then a ':', then the port number). Press Enter.
Now you can see the live feed coming from your webcam directly on your laptop or mobile, or both at the same time. However, this is a local connection. To make it public, expose it on a public IP so that it can be accessed from anywhere in the world.

Fiware CEP server stops responding‏

While developing in Fi-Cloud's CEP I've been having an issue that happens repeatedly. As I try to develop a definition to perform a task, CEP's server and Authoring Tool stop responding, although ssh is still responsive.
This issue happens as I develop. I'm using the Authoring Tool to alter the definition bit by bit, and then I re-upload it to the server through the Authoring Tool's export feature.
To restart the Proton instance with the new definition each time I alter it, I use Postman with this single operation:
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
header: 'Content-Type': 'application/json'
body: {"action": "ChangeDefinitions", "definitions-url": "/ProtonOnWebServerAdmin/resources/definitions/Definition_Name"}
At the same time, I'm logged in with three ssh instances: one to monitor the files being created in /opt/tomcat10/sample/ and other things, and the other two to 'tail -f' the log files the definition writes to as events are processed: one log for events received and another for events detected by the EPAgent.
I'm iterating through these procedures over and over as I develop, and eventually the CEP server and the Authoring Tool stop responding.
By "tailing" Tomcat's log file (# tail -f /opt/tomcat10/logs/catalina.out) I can see that, under these circumstances, if I attempt a:
-GET (url: http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer)
I get no response back and tomcat logs the following response:
11452100 [http-bio-8080-exec-167] ERROR org.apache.wink.server.internal.RequestProcessor - An unhandled exception occurred which will be propagated to the container.
java.lang.OutOfMemoryError: PermGen space
Exception in thread "http-bio-8080-exec-167" java.lang.OutOfMemoryError: PermGen space
Ssh is still responsive and I can look at Tomcat's log this way.
To get over this and continue, I exit the ssh connections and restart the CEP instance in the Fi-Cloud.
Is the procedure I'm using to re-upload and re-run the definition inappropriate? Should I take a different approach to developing?
If you require more information, please let me know.
Thank you
When you update a definition that the CEP is already working with, and you want the CEP engine to work with the updated definition, you need to:
Export the definition using the authoring tool export (as you did)
Stop the engine, using a REST PUT:
PUT http://{host}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"stop"}
Start the engine, using a REST PUT:
PUT http://{host}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"start"}
You don't need to activate the "ChangeDefinitions" action, since it is the same definition name that the engine is already working with.
Activating the "ChangeDefinitions" action only influences the next run of the CEP and has no influence on the current run.
This answers your question about how you should update a CEP definition.
Hope it will solve your issue.

Test for Apple Push Notification

I am using node.js (server framework) and mongoose.js (Mongo-based schema modeling) as the backend for an iOS app, and I am using Mocha (test framework) to make sure everything works.
What I really want to know, and can find no documentation on, is how to test on the server whether the push notifications are being sent appropriately. I am using apnagent, and at the moment I can see that push notifications are being sent correctly by manually checking my device, but I am having difficulty finding an automated way to test that they are working correctly.
That may be enough of a description to answer at a high level what needs to be done. But in case it is not, here is the actual code:
Mongoose Model fires off a push notification upon creation:
# this code is called after this model is saved in mongodb
eventModel.post 'save', (doc) ->
  # push the message
  sendMessageToDevice = (event, token) ->
    message =
      event_body:
        eventId: event._id
        lat: event.lngLat[1]
        lng: event.lngLat[0]
    agent.createMessage()
      .device(token)
      .alert('New Event! ' + event.description)
      .set(message)
      .send()

  # cycle through the users to push to
  # get all the unique device tokens in the database for APN
  users.getAllUniqueDeviceTokens (error, devices) ->
    if error then return util.handleError error
    console.log "Sending push notices to all devices (%d):", devices.length
    console.log devices
    for token in devices
      sendMessageToDevice doc, token
    # send some verification here that the code ran correctly???
Then in my Mocha test file I have:
it 'should receive push notification from fort creation', (done) ->
  # some logic here to verify that push notifications were sent
  done()
In many situations, while writing tests, it is either impossible or simply too dangerous to verify that an action has really taken place (e.g. that a push notification has been delivered). Imagine writing a unit test for the rm command where you would like to ensure that rm -rf / succeeds. Obviously, you cannot let this action take place and then verify that your root partition is indeed empty!
What you can do, however (and should do, really), is verify that the commands, routines or other actions necessary to accomplish the task are being invoked correctly, without actually allowing them to take place.
In your particular situation, you do not need to verify that your push notification has been delivered because your application is not responsible for the notification's delivery. However, you can test that the push notification is being correctly delivered to the push server.
So, instead of testing for successful delivery, you test
Whether the outgoing request is properly formatted (e.g. the JSON is valid)
Whether it contains the data you expect it to contain (e.g. a field in the JSON is present and contains the expected data)
Whether the authentication token required by the server is included
Whether the target server is correct (i.e. you are indeed sending the data to xxx.apple.com and not to localhost)
Ideally, these test requests will not even reach the target server - doing so would mean you are relying on two factors that are not always perfectly stable:
network connectivity
target server availability and proper functionality
In the past, I dealt with this by first manually issuing a correct request, capturing the response, and then mocking the whole communication in the unit test (using e.g. nock). That way, I am completely in control of the whole communication.
As far as I know, there's no way to check whether an APNS request has reached its destination or not. Apple tends to have this "everything's fine, and if it's not, then it's your fault" policy with us developers. If things haven't changed since I started coding, you make an APNS request by sending raw data (the JSON payload, you probably know the whole format) through port 2195, and you get absolutely no response for that.
The only thing I can think of: if you have a physical iOS device (an iPod, an iPhone or an iPad), you can "automate" a test by launching a push request with a hardcoded token corresponding to your device and a test app, and if you receive the notification, then it works.
Oh, and if it doesn't work, please make sure you have all required ports open if you're operating behind a firewall. It's the first big stumbling block I hit when I first dove into this ;) (related: https://support.apple.com/en-us/HT203609)
I would use a request mocking framework like nock to intercept the request to APN. The URLs seem to be located in the code here.

Push notification doesn't work for Passbook from C#

I am trying to send an Apple push notification for Passbook from C#. I am not getting any error or exception, but the notification is not received on the device.
The following steps have already been taken:
- I am using the production pass p12 certificate that I also use for pass signing
- I have TCP port 2196 open, as required by APNS
- I am sending an empty JSON payload and the push token (which I receive from Passbook when the user adds the pass to Passbook and our service is invoked)
When I try to read the response from the SslStream, I get \b\a\0\0\0\0
Any help would be greatly appreciated. Thanks!
You DO need a changeMessage if you want anything to show on the lock screen, but you could also consider a relevantDate alert if it is always after 7 days. The following answer details all of the mandatory requirements to invoke a lock screen message from a push to a pass: How to make a push notification for a pass.
Specifically note point 5:
alert, badge, sound and custom property keys are all ignored - the push's only purpose is to notify Passbook that your web service has a fresh pass. The notification text will be determined by the changeMessage key in pass.json and the differences between the old and the new .pkpass bundles.
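For illustration, a field carrying a changeMessage in pass.json looks roughly like this (the key, label and values here are made up for the example; Passbook substitutes the field's new value for the %@ placeholder when it shows the lock screen message):
"primaryFields": [
    {
        "key": "event",
        "label": "EVENT",
        "value": "Original event name",
        "changeMessage": "Event updated to %@"
    }
]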
