All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi ITWhisperer, this is exactly the simple, elegant solution I needed. Thank you very much. It works fine.
I am really struggling to add my macOS data into Splunk, just like we can upload the event logs of Windows. Are there any add-ons I can install to help me do this? If there are, can anyone explain how to configure one and make it work?
Hi, below is the dashboard query, which works fine for EC2 Port Probe events, but the rest of the events are not displayed in the dashlet. When we use the Open in Search option, we find events in the Events tab but nothing in Statistics, even after changing the mode from Fast to Verbose. Please help.

index="aws_generic" source="aws.guardduty" detail.type=Discovery:S3/AnomalousBehavior*
| eval newtime=strftime(_time,"%m/%d/%y %H:%M:%S")
| rex field=host "(?<service>.*):(?<cloudprovider>.*):(?<region>.*):(?<cluster>.*):(?<role>.*):(?<stagingarea>.*)"
| stats sparkline(count) as history max(newtime) as "event time" by stagingarea detail.region detail.type detail.severity detail.description detail.accountId detail.id
| eval times=mvindex(times, 0, 2)
| sort - "event time" detail.severity
| table "event time","detail.accountId","detail.region","detail.severity","history","detail.type","detail.description"
| rename "event time" as "Event Time","detail.accountId" as "AWS Account ID","detail.region" as "AWS Region","detail.type" as "Finding Type","detail.severity" as "Severity","history" as "Event History","detail.description" as "Description"
Hi, I am calculating the difference between two search results as below. Sometimes the panel takes a bit of time to return its results, so the variance shows a false count. Could you please suggest how to fix this? Thanks in advance.

SPL:
| makeresults
| eval variance=$MA:result.macoscount$ - $COSMOS:result.cosmacount$
| table variance

Issue: the middle panel (in blue) shows the result as "MA to COSMOS value" - "COSMOS to P.H.B".
My environment just moved to JSM for monitoring and resolving alerts, and we have since lost the ability to link back to the Splunk search an alert originated from when the alert is triggered and sent to the alert center. Is there a way to do this with this add-on?
That is puzzling. If I understand correctly, you're talking about the host_regex setting of the monitor input, right? The docs don't say that any kind of escaping is required. If it is required, however, it would be great if you posted docs feedback (there is a form at the bottom of https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf ) describing your situation and how it differs from the documented behavior.
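For reference, a minimal monitor stanza using host_regex might look like the sketch below. The path and pattern here are hypothetical placeholders, not the poster's actual configuration; host_regex takes the first capture group of the regex, applied to the path of the monitored file, as the event's host.

```ini
# inputs.conf -- hypothetical sketch, not the poster's actual config
[monitor:///var/log/remote/*/messages]
# The first capture group of host_regex, matched against the
# monitored file's path, becomes the host field of the events.
host_regex = /var/log/remote/([^/]+)/messages
```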
Rather than setting the value to true, set it to the line you want in your search

<input type="checkbox" token="LastOne_tkn">
  <label>Dedup</label>
  <choice value="| dedup Attr1">Dedup</choice>
  <default></default>
  <initialValue></initialValue>
</input>

Then use the token in your search

index=machinedata $LastOne_tkn$
| table Attr1, Attr2
I cannot understand why you say you are not getting a "table". Using the lookup sample you gave and the two code samples @bowesmana gave, these are the results from my instance:
1. Transpose alone
2. Transpose + foreach
Both are just like table. Are they not?
It works; I have an IP list based on the specified system name (prod etc.). Now how can I associate this list with a search, so that the list of IPs displayed by this query can be attached to DstIP?

| search sourcetype="new" DstIP=(list of above ip)
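One common pattern for this is a subsearch, which expands into an OR list of field=value terms. This is a sketch, assuming the field in the events is named DstIP and that the IP-producing search from earlier in the thread returns its IPs in a field called ip (rename as needed for your actual field names):

```spl
sourcetype="new"
    [ search <your IP-list search here>
      ``` the subsearch must return the field name used in the outer search ```
      | rename ip as DstIP
      | fields DstIP ]
```

The subsearch result is substituted as (DstIP="1.2.3.4" OR DstIP="5.6.7.8" ...) before the outer search runs.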
Rest assured, if I had any suggestions, I would have given them by now.
OK, so I'll ask it another way: what output would you like, for example, from the 8 lines you shared earlier?
Thanks @ITWhisperer  , it worked 
If I understand your question correctly, you want group-matching messages to be displayed as a single string like "file put successfully", not separately as "Inbound file processed successfully GL1025pcardBCAXX8595143691007", "File put Succesfully GL1025pcardBCAXX8595143691007", and so on. This is a common requirement. But in addition to removing the unnecessary asterisks in the regexes, as @ITWhisperer points out, you should group them before performing stats. Here is the code

| eval message = if(match(message, "File put Succesfully|Successfully created file data|Archive file processed successfully|Summary of all Batch|processed successfully for file name|ISG successful Call|Inbound file processed successfully|ISG successful Call"), "file put successfully", message)
| stats values(message) as message

Suppose you have events with the following values of message:

message
Inbound file processed successfully GL1025pcardBCAXX8595143691007
Inbound file processed successfully GL1025pcardBCAXX8595144691006
Inbound file processed successfully GL1025pcardBCAXX8732024191001
Inbound file processed successfully GL1025transBCAXX8277966711002
File put Succesfully GL1025pcardBCAXX8595143691007
File put Succesfully GL1025pcardBCAXX8595144691006
File put Succesfully GL1025pcardBCAXX8732024191001
File put Succesfully GL1025transBCAXX8277966711002
some unmatching value
some other unmatching value

The result will be

message
file put successfully
some other unmatching value
some unmatching value

Is this what you are looking for?
Here is an emulation that you can play with and compare with real data

| makeresults
| eval message = mvappend("Inbound file processed successfully GL1025pcardBCAXX8595143691007",
    "Inbound file processed successfully GL1025pcardBCAXX8595144691006",
    "Inbound file processed successfully GL1025pcardBCAXX8732024191001",
    "Inbound file processed successfully GL1025transBCAXX8277966711002",
    "File put Succesfully GL1025pcardBCAXX8595143691007",
    "File put Succesfully GL1025pcardBCAXX8595144691006",
    "File put Succesfully GL1025pcardBCAXX8732024191001",
    "File put Succesfully GL1025transBCAXX8277966711002",
    "some unmatching value",
    "some other unmatching value")
| mvexpand message
``` data emulation above ```
I have an SC4S deployment running in an EC2 instance. I followed the documentation provided here: https://splunk.github.io/splunk-connect-for-syslog/main/. I have a C# application running inside Docker on the same host where SC4S is running. My application is able to send syslog data on port 514, and the data is visible in the Splunk Cloud dashboard under the sourcetype sc4s:fallback. I am now running the same application on my local Windows machine, trying to send data to the same port and the Linux machine's IP. The data reaches the host machine, because I can see it in a TCP dump, but SC4S is not ingesting it into Splunk Cloud. What should my next step in debugging be? I have tried everything from my side but still cannot figure out what the issue with my SC4S deployment is.
@ITWhisperer++++ Any suggestions pls?
Cheers Rick, the regex I ended up with is this: (?:.*)\/(\w*). The one you suggested, (?:.*)/(\w*), didn't work. Thanks, Alex
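For what it's worth, in PCRE (the regex flavor rex uses) a backslash before a forward slash is a no-op, so the two patterns should match identically; if one form failed in your environment, the difference was likely in the surrounding quoting rather than the slash itself. A quick emulation to compare them side by side (the sample value is made up, not the thread's real data):

```spl
| makeresults
| eval sample="path/to/target"
| rex field=sample "(?:.*)\/(?<escaped>\w*)"   ``` escaped slash ```
| rex field=sample "(?:.*)/(?<plain>\w*)"      ``` bare slash ```
| table sample escaped plain
```

Both capture groups should extract the text after the last slash.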
Hi, on a dashboard I have a simple checkbox element: LastOne_tkn (token name). If the checkbox is enabled, then LastOne_tkn=TRUE. There is a simple small table view which shows some results. I would like to run the query in that table view based on the LastOne_tkn condition.

LastOne_tkn=TRUE (dedup activated):
index=machinedata | dedup Attr1 | table Attr1, Attr2

LastOne_tkn=otherwise (dedup deactivated):
index=machinedata | table Attr1, Attr2

Any idea, please?
Thank you for posting mock data emulation. Obviously the app developers do not implement self-evident semantics and should be cursed. (Not just for Splunk's sake, but for every other developer's sanity.) If you have any influence on the developers, demand that they change the JSON structure to something like

{
  "browser_id": "0123456",
  "browsers": {
    "fullName": "blahblah",
    "name": "blahblah",
    "state": 0,
    "lastResult": {
      "success": 1,
      "failed": 2,
      "skipped": 3,
      "total": 4,
      "totalTime": 5,
      "netTime": 6,
      "error": true,
      "disconnected": true
    },
    "launchId": 7
  },
  "result": [
    {
      "id": 8,
      "description": "blahblah",
      "suite": [ "blahblah", "blahblah" ],
      "fullName": "blahblah",
      "success": true,
      "skipped": true,
      "time": 9,
      "log": [ "blahblah", "blahblah" ]
    }
  ],
  "summary": {
    "success": 10,
    "failed": 11,
    "error": true,
    "disconnected": true,
    "exitCode": 12
  }
}

That is, isolate the browser_id field into a unique key for browser, results, and summary. The structure you shared cannot express more than one semantic browser_id. But if for some bizarre reason the browser_id needs to be passed along in results because summary is not associated with browser_id in each event, say so expressly with a JSON key, like

{
  "browsers": {
    "id": "0123456",
    "fullName": "blahblah",
    "name": "blahblah",
    "state": 0,
    "lastResult": {
      "success": 1,
      "failed": 2,
      "skipped": 3,
      "total": 4,
      "totalTime": 5,
      "netTime": 6,
      "error": true,
      "disconnected": true
    },
    "launchId": 7
  },
  "result": {
    "id": "0123456",
    "output": [
      {
        "id": 8,
        "description": "blahblah",
        "suite": [ "blahblah", "blahblah" ],
        "fullName": "blahblah",
        "success": true,
        "skipped": true,
        "time": 9,
        "log": [ "blahblah", "blahblah" ]
      }
    ]
  },
  "summary": {
    "success": 10,
    "failed": 11,
    "error": true,
    "disconnected": true,
    "exitCode": 12
  }
}

Embedding data in a JSON key is the worst use of JSON - or any structured data. (I mean, I recently lamented worse offenders, but imagine embedding data in a column name in SQL!
The developer will be cursed by the entire world.)

This said, if your developer has a gun over your head, or they are from a third party that you have no control over, you can SANitize their data, i.e., make the structure saner using SPL. But remember: a bad structure is bad not because a programming language has difficulty with it. A bad structure is bad because downstream developers cannot determine the actual semantics without reading the original manual. Do you have their manual to understand what each structure means? If not, you are very likely to misrepresent their intention and therefore get the wrong result.

Caveat: as we are speaking of semantics, I want to point out that your illustration uses the plural "browsers" as a key name as well as the singular "result" as another key name, yet the value of (plural) "browsers" is not an array, while the value of (singular) "result" is an array. If this is not the true structure, you have changed the semantics your developers intended, and the following may lead to wrong output. Secondly, your illustrated data has a level-1 key of "0123456" in browsers, an identical level-1 key of "0123456" in result, a matching level-2 id of "0123456" in browsers, and a different level-2 id of 8 in result. I assume that all matching values are semantically identical and non-matching values are semantically different.

Here, I will give you SPL to interpret their intention as in my first illustration, i.e., a single browser_id applies to the entire event. I will assume that you have Splunk 9 or above so fromjson works. (This can also be solved using spath with slightly more cumbersome quotation manipulation.) Here is the code to detangle the semantic madness. This code does not require the first line, fields _raw, but including it can help eliminate distractions.
| fields _raw ``` to eliminate unusable fields from bad structure ```
| fromjson _raw
| eval browser_id = json_keys(browsers), result_id = json_keys(result)
| eval EVERYTHING_BAD = if(browser_id != result_id OR mvcount(browser_id) > 1, "baaaaad", null())
| foreach browser_id mode=json_array
    [eval browsers = json_delete(json_extract(browsers, <<ITEM>>), "id"), result = json_extract(result, <<ITEM>>)]
| spath input=browsers
| spath input=result path={} output=result
| mvexpand result
| spath input=result
| spath input=summary
| fields - _* result_id browsers result summary

This is the output based on your mock data; to illustrate the result[] array, I added one more mock element, so there is one row per element of result[] (ids 8 and 9). Each row carries browser_id ["0123456"] plus the flattened browsers fields (name, state 0, launchId 7, lastResult.* values 1 through 6), the flattened summary fields (success 10, failed 11, exitCode 12), and the per-result fields (description, suite{}, log{}, time 9 and 11). In the table, "id" comes from result[]. Note that fields present at more than one level, such as success and fullName, come out as multivalue.

This is the emulation of the expanded mock data. Here, I decided not to use format=json because this preserves the pretty-print format, and also because Splunk will not show fromjson-style fields automatically. (With real data, fromjson-style fields are not used in 9.x.)
| makeresults
| eval _raw="{
  \"browsers\": {
    \"0123456\": {
      \"id\": \"0123456\",
      \"fullName\": \"blahblah\",
      \"name\": \"blahblah\",
      \"state\": 0,
      \"lastResult\": {
        \"success\": 1,
        \"failed\": 2,
        \"skipped\": 3,
        \"total\": 4,
        \"totalTime\": 5,
        \"netTime\": 6,
        \"error\": true,
        \"disconnected\": true
      },
      \"launchId\": 7
    }
  },
  \"result\": {
    \"0123456\": [
      {
        \"id\": 8,
        \"description\": \"blahblah\",
        \"suite\": [ \"blahblah\", \"blahblah\" ],
        \"fullName\": \"blahblah\",
        \"success\": true,
        \"skipped\": true,
        \"time\": 9,
        \"log\": [ \"blahblah\", \"blahblah\" ]
      },
      {
        \"id\": 9,
        \"description\": \"blahblah 9\",
        \"suite\": [ \"blahblah9a\", \"blahblah9b\" ],
        \"fullName\": \"blahblah9\",
        \"success\": true,
        \"skipped\": true,
        \"time\": 11,
        \"log\": [ \"blahblah 9a\", \"blahblah 9b\" ]
      }
    ]
  },
  \"summary\": {
    \"success\": 10,
    \"failed\": 11,
    \"error\": true,
    \"disconnected\": true,
    \"exitCode\": 12
  }
}"
| spath
``` the above partially emulates index="github_runners" sourcetype="testing" source="reports-tests" ```
This will do the trick:

| mstats avg(cpu_metric.*) as cpu_* WHERE index=<your_metrics_index> by CPU, host
| table CPU, host
| eventstats max(CPU) as cpu_count by host
| table cpu_count, host
| eval cpu_count=cpu_count+1 ``` CPU ids are zero-based, so add 1 to the highest index ```

The data being used is from the add-on. Link to the Splunk Add-on for Unix and Linux docs.
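A quick emulation of the counting step above, with made-up zero-based CPU ids on a single mock host rather than the add-on's real metrics: since the ids start at 0, the highest index plus one gives the CPU count.

```spl
| makeresults count=4
| streamstats count as CPU
| eval CPU=CPU-1, host="hostA"    ``` CPU ids 0..3, one mock host ```
| stats max(CPU) as cpu_count by host
| eval cpu_count=cpu_count+1      ``` max index 3 + 1 = 4 CPUs ```
```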
Hi @SumitSharma, this app isn't certified for Splunk Cloud. In addition, this app doesn't seem to be free (I could be wrong about this!). Anyway, you should consider this app as a custom app and modify it to remove the part containing scripts, which would probably block the upload to Splunk Cloud. The app isn't accessible, so I cannot be more detailed. Ciao. Giuseppe