All Posts

@tscroggins @yuanliu Yes, the SPL below is really hard to follow because of the nested commands. Could you please explain briefly how it works?

| makeresults
| eval ip = split("119.0.6.159,62.0.3.75,63.0.3.84,75.0.3.80,92.0.4.159", ",")
| eval idx = mvrange(0,4)
| foreach ip mode=multivalue
    [eval sorted_ip = mvappend(sorted_ip, mvjoin(mvmap(idx, printf("%.3d", tonumber(mvindex(split(<<ITEM>>, "."), idx)))), "."))]
| eval sorted_ip=mvmap(mvsort(mvmap(sorted_ip, mvjoin(mvmap(split(sorted_ip, "."), substr("00".sorted_ip, -3)), "."))), mvjoin(mvmap(split(sorted_ip, "."), coalesce(nullif(ltrim(sorted_ip, "0"), ""), "0")), "."))
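In outline, the search sorts IPv4 addresses as strings by zero-padding every octet to three digits (so "62.0.3.75" becomes "062.000.003.075", where lexicographic order matches numeric order), sorting, and then stripping the padding back off. A simplified sketch of the same idea, using mvexpand instead of the all-multivalue gymnastics and mirroring the element-referencing convention of the original (the field names padded and ip_sorted are illustrative, not taken from the search above):

| makeresults
| eval ip = split("119.0.6.159,62.0.3.75,63.0.3.84", ",")
| mvexpand ip
``` pad every octet to three digits: 62.0.3.75 -> 062.000.003.075 ```
| eval padded = mvjoin(mvmap(split(ip, "."), printf("%.3d", tonumber(ip))), ".")
``` a plain string sort of the padded form is now a numeric sort ```
| sort padded
``` strip the padding again, keeping a lone "0" intact ```
| eval ip_sorted = mvjoin(mvmap(split(padded, "."), coalesce(nullif(ltrim(padded, "0"), ""), "0")), ".")
| table ip padded ip_sorted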
Adding a by-field of "serial_number" to your final stats will display a chart like this. Similarly, instead of the stats you could do

| chart count as count over serial_number by result

and this should give you very similar results. For an overall Pass/Fail visual across all serial numbers you can do a stats like this:

| stats count as count by result

and the resulting chart shows something like this.
| eval test="Test" | table test passes failures
Great! Hopefully this is solved then.
Just as I posted this question, my Ubuntu forwarder appeared! Could anyone explain why the Linux forwarder seemed to take longer to appear than the Windows one?
Sure, this is the screenshot: A hint from a colleague put me on track to look into the dashboard's source. It turned out that a hidden character had crept into a line where an option was set. I deleted the line, re-entered the text, and this solved the problem.
Hello everybody, I'm new here and recently I created this setup:

Ubuntu: Splunk server
Ubuntu: Splunk forwarder
Windows 10: Splunk forwarder

I followed the Splunk How-To video for the Ubuntu forwarder: https://www.youtube.com/watch?v=rs6q28xUd-o&t=191s I can see my host in the data summary but not in Forwarder Management: how would you explain that? I'm thinking it may be permissions, so here is what I have. I also added a deploymentclient.conf in /opt/splunkforwarder/etc/system/local/:

[deployment-client]

[target-broker:deploymentServer]
targetUri = 192.ipfromserver:8089

Have a great evening!
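For what it's worth, the stanza layout above is the expected shape; the usual gotchas are that targetUri needs the deployment server's real IP or hostname and that the forwarder must be restarted afterwards. A quick way to set and verify this from the CLI (a sketch; the 192.0.2.10 address is a placeholder):

# point the forwarder at the deployment server (writes deploymentclient.conf)
/opt/splunkforwarder/bin/splunk set deploy-poll 192.0.2.10:8089
# confirm which deploymentclient.conf settings actually win
/opt/splunkforwarder/bin/splunk btool deploymentclient list --debug
# restart so the client starts phoning home
/opt/splunkforwarder/bin/splunk restart

After the restart, the forwarder should show up in Forwarder Management within a few minutes (the default phone-home interval is 60 seconds).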
I have a search as follows:

index=* | search sourcetype=*
| spath logs{} output=logs
| spath serial_number output=serial_number
| spath result output=result
| table serial_number result
```stats dc(serial_number) as throughput|```
| stats count(eval(if(result="Fail",1,null()))) as failures count(eval(if(result="Pass",1,null()))) as passes

This returns a table, shown in the capture, with failures=215 and passes=350. How can I get these results as two separate bars in one bar chart? Basically I want to show the pass/fail rate.

Sample of the JSON data I am working with: {"serial_number": "30913JC0024EW1482300425", "type": "Test", "result": "Pass", "logs": [ {"test_name": "UGC Connect", "result": "Pass"}, {"test_name": "Disable UGC USB Comm Watchdog", "result": "Pass"}, {"test_name": "Hardware Rev", "result": "Pass", "received": "4"}, {"test_name": "Firmware Rev", "result": "Pass", "received": "1.8.3.99", "expected": "1.8.3.99"}, {"test_name": "Set Serial Number", "result": "Pass", "received": "1 A S \n", "expected": "1 A S"}, {"test_name": "Verify serial number", "result": "Pass", "received": "JC0024EW1482300425", "expected": "JC0024EW1482300425", "reason": "Truncated full serial number: 30913JC0024EW1482300425 to JC0024EW1482300425"}, {"test_name": "Thermocouple", "pt1_ugc": "24969.0", "pt1": "25000", "pt2_ugc": "19954.333333333332", "pt2": "20000", "pt3_ugc": "14993.666666666666", "pt3": "15000", "result": "Pass", "tolerance": "1000 deci-mV"}, {"test_name": "Cold Junction", "result": "Pass", "ugc_cj": "278", "user_temp": "270", "tolerance": "+ or - 5 C"}, {"test_name": "Glow Plug Open and Short", "result": "Pass", "received": "GP Open, Short, and Load verified OK.", "expected": "GP Open, Short, and Load verified OK."}, {"test_name": "Glow Plug Power On", "result": "Pass", "received": "User validated Glow Plug Power"}, {"test_name": "Glow Plug Measure", "pt1_ugc": "848", "pt1": "2070", "pt1_tolerance": "2070", "pt2_ugc": "5201", "pt2": "5450", "pt2_tolerance": "2800", "result": "Pass"}, {"test_name": "Motor Soft Start", "result": "Pass", "received": "Motor Soft Start verified", "expected": "Motor Soft Start verified by operator"}, {"test_name": "Motor", "R_rpm_ugc": 1525.0, "R_rpm": 1475, "R_v_ugc": 160.0, "R_v": 155, "R_rpm_t": 150, "R_v_t": 160, "R_name": "AUGER 320 R", "F_rpm_ugc": 1533.3333333333333, "F_rpm": 1475, "F_v_ugc": 164.0, "F_v": 182, "F_rpm_t": 150, "F_v_t": 160, "F_name": "AUGER 320 F", "result": "Pass"}, {"test_name": "Fan", "ugc_rpm": 2436.0, "rpm": 2130, "rpm_t": 400, "ugc_v": 653.3333333333334, "v": 630, "v_t": 160, "result": "Pass"}, {"test_name": "RS 485", "result": "Pass", "received": "All devices detected", "expected": "Devices detected: ['P']"}, {"test_name": "Close UGC Port", "result": "Pass"}, {"test_name": "DFU Test", "result": "Pass", "received": "Found DFU device"}, {"test_name": "Power Cycle", "result": "Pass", "received": "User confirmed power cycle"}, {"test_name": "UGC Connect", "result": "Pass"}, {"test_name": "Close UGC Port", "result": "Pass"}, {"test_name": "USB Power", "result": "Pass", "received": "USB Power manually verified"}]}
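One way to get two bars from this (a sketch building on the search above, not the only approach): keep the two counts and transpose them, so each count becomes its own row, which a bar chart then renders as a separate bar.

| stats count(eval(if(result="Fail",1,null()))) as failures count(eval(if(result="Pass",1,null()))) as passes
| transpose
| rename column AS result, "row 1" AS count

Alternatively, | stats count by result produces the same two rows directly, with result as the category axis.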
Hi @TheSteveBennett, don't attach a new question to an existing (and closed) one, because it's difficult to get an answer; open a new question! Anyway, you have to search through the inputs.conf files you have on the Splunk receiver (usually a Heavy Forwarder). If you want to use the same port for the same logs (same index and sourcetype), you don't need to do anything. Remember that if you want to use a different port, you can do it via the GUI; if instead you want to use the same port for a different sourcetype, you have to do it by modifying the inputs.conf file, adding the IP address of the sending host to the stanza. Ciao. Giuseppe
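As an illustration (a hypothetical inputs.conf sketch, not taken from any real deployment), two TCP inputs can share a port when each stanza is keyed on the sending host's address:

[tcp://192.0.2.10:5140]
index = firewall
sourcetype = fw_syslog

[tcp://192.0.2.20:5140]
index = network
sourcetype = switch_syslog

Events arriving on port 5140 are then routed by source IP to different indexes and sourcetypes.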
Hi @parthiban, you probably don't have access to the summary index. Create a new summary_alerts index and use it. Ciao. Giuseppe
I think utilizing a lookup to track timestamps and changes of the onlineStatus by serialNumber would work for this.

| search index="XXXX" invoked_component="YYYYY" "Genesys system is available"
| spath input=_raw output=new_field path=response_details.response_payload.entities{}
| mvexpand new_field
| fields new_field
| spath input=new_field output=serialNumber path=serialNumber
| spath input=new_field output=onlineStatus path=onlineStatus
| where serialNumber!=""
| lookup Genesys_Monitoring.csv serialNumber
| where Country="Egypt"
| stats latest(onlineStatus) as latestOnlineStatus, latest(_time) as latestStatusEpoch by serialNumber, Country
| lookup host_online_status_tracking serialNumber, Country OUTPUT latestOnlineStatus as previousLatestOnlineStatus, latestStatusEpoch as previousLatestStatusEpoch
| inputlookup append=true host_online_status_tracking
| stats first(latest*) as latest*, first(previous*) as previous* by serialNumber, Country
| outputlookup host_online_status_tracking
| where ('latestStatusEpoch'>'previousLatestStatusEpoch' OR isnull(previousLatestStatusEpoch)) AND NOT 'latestOnlineStatus'=='previousLatestOnlineStatus'

The lookup is referenced to pull in the latest onlineStatus and timestamp from the previous time this search was run. You can see that there is an outputlookup that updates the lookup every time with the most up-to-date data about each unique serialNumber. The final where clause is what determines whether the alert fires, while the lookup itself stays updated with onlineStatuses. The idea is that the alert should only fire once after the onlineStatus changes, since the lookup will be updated as well.

Run 1: Serial_1: Status=Up
    Exported to lookup ----> Serial_1: Status=Up
Run 2: Serial_1: Status=Down <----> Previous_From_Lookup=Up
    Alert fires
    Exported to lookup ----> Serial_1: Status=Down
Run 3: Serial_1: Status=Down <----> Previous_From_Lookup=Down
    Alert doesn't fire since the statuses match
    Exported to lookup ----> Serial_1: Status=Down
Run 4: Serial_1: Status=Up <----> Previous_From_Lookup=Down
    Alert fires
    Exported to lookup ----> Serial_1: Status=Up

That is the idea, anyway. Hope this helps!
Hi, we're currently trying to test an HTTP Event Collector token by sending events directly to the cloud before we use the HEC for an OpenTelemetry Connector, but we are getting stuck on a 403 Forbidden error. Is there something wrong with this curl command? Not sure if it affects anything, but we are still on Splunk Cloud Classic. Screenshots attached; we appreciate any help we can get!
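For reference, a curl invocation in roughly this shape should work against Splunk Cloud Classic (a sketch with placeholder values; <stack> and <token> are yours to fill in). A very common cause of a 403 on Classic is hitting the bare stack hostname instead of the http-inputs- prefixed one:

curl "https://http-inputs-<stack>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <token>" \
  -d '{"event": "hec smoke test", "sourcetype": "manual"}'

A 403 can also mean the token itself is disabled, so it's worth double-checking the token's status in the HEC settings as well.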
Thank you, @VatsalJagani! That is precisely the sort of information I'd been trying to find. From what you've stated, I think our app may indeed support SHC. The initial structure of the app was created using Splunk's Add-on Builder app, with inputs defined as modular inputs backed by custom Python code. So all the configuration is stored as parameters on these inputs and provided via the Splunk web interface. The only other information stored and read by the app is a bit of state info that lives in StoragePasswords. The AoB framework provides a helper that gives access to various services such as storage_passwords, and I would assume that it's making REST calls behind the scenes. Anyway, thank you again. I appreciate the response!
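For anyone curious what those behind-the-scenes calls look like, here is a minimal sketch of reading state from storage/passwords with the Splunk SDK for Python directly (not the AoB helper; the connection details, app name, and realm below are all hypothetical):

import splunklib.client as client

# connect to the local management port; credentials are placeholders
service = client.connect(
    host="localhost", port=8089,
    username="admin", password="changeme",
    app="my_addon",
)

# storage_passwords wraps the storage/passwords REST endpoint;
# each entry carries realm, username, and clear_password
for secret in service.storage_passwords:
    if secret.realm == "my_addon_realm":
        print(secret.username, secret.clear_password)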
@PickleRick, by any chance VPN/firewall logs?
Hello darlas, I was just refreshing my knowledge of the ISO 8601 timestamp format and read your post from 5 years and 9 months ago. I don't see that anyone ever responded to your question: "I also am trying to parse or reformat an ISO 8601 date into something more human friendly. Hope someone can help."

ISO 8601 format:

| eval newtime=strftime(_time, "%Y-%m-%dT%H:%M:%S.%3N%z")

Here is something more human-readable without getting too far away from the ISO standard. I like to change the year with century (%Y) to one without (%y), leave out the T separator and the time zone offset (%z), and keep the milliseconds (%3N). I also like to add an @ between the date and time strings, but that can be added or removed depending on preference and the horizontal real estate available in the report or dashboard panel. Hope this helps, if you still need help.

| eval newtime=strftime(_time, "%m/%d/%y @ %H:%M:%S.%3N")
Sorry, I can't help you here. I'm not a Windows expert.
The total in your case is calculated by the visualization itself (which means your browser). Apparently the JavaScript that calculates the totals treats all your row values as floating-point numbers, and floating-point calculations do tend to lead to ugly rounding errors. You could file a bug on that, because it is indeed ugly.
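To see the effect, you can reproduce it in any browser console (plain JavaScript, since that is what the visualization runs):

// classic binary floating-point artifact: 0.1 has no exact
// binary representation, so the sum is off by a tiny amount
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false

Summing a column of such values accumulates the same kind of error, which is why the displayed total can disagree with the rounded row values.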
Yes, that sounds logical. You can add an idea to document these fields at https://ideas.splunk.com/ideas/new
Maybe it could be:

tcp_Kprocessed == KB received by the receiver as a packet of events
kb == the real KB (compressed) written to indexer storage

So, for my purposes, I'll keep using the sum of "kb" as the volume of data from UF to indexers. Yes, it's not documented at all 🤷 Thanks!
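For the record, the sum discussed here comes from metrics.log; a sketch of the kind of search that totals forwarder traffic per sending host (group and field names are from the standard tcpin_connections metrics events):

index=_internal source=*metrics.log* group=tcpin_connections
| stats sum(kb) AS total_kb BY hostname
| eval total_gb = round(total_kb / 1024 / 1024, 2)

The same tcpin_connections events carry the tcp_Kprocessed counter, which is where the two numbers being compared above both come from.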
It seems tcp_kprocessed is the total transferred data and kb is the volume indexed for that particular event. You may want to submit a support ticket for further information, as this doesn't look documented.