
All Posts

Transaction seems to have a mind of its own (there are some not-well-documented nuances to how it works). Try something like this before your transaction command (to give it a hand!):

| streamstats count(eval(ReasonCode="Full")) as fullCount count(eval(ReasonCode="Ready")) as readyCount by EquipmentName
| where fullCount=1 OR readyCount=1
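A run-anywhere sketch of what that pre-filter does (the EquipmentName and ReasonCode sample values below are made up for illustration): the where clause keeps only the first "Full" and the first "Ready" per equipment, so transaction gets one clean start/end pair instead of repeated status events.

| makeresults count=4
| streamstats count as n
| eval EquipmentName="DA01_CH01", ReasonCode=case(n==1,"Full", n==2,"Full", n==3,"Ready", n==4,"Ready")
| streamstats count(eval(ReasonCode="Full")) as fullCount count(eval(ReasonCode="Ready")) as readyCount by EquipmentName
| where fullCount=1 OR readyCount=1

Here rows 1 and 3 survive (the first "Full" and the first "Ready"); the duplicates are dropped before transaction ever sees them.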
Splunk functions should _not_ truncate any data on their own (unless you explicitly use some text-manipulation function, of course). There might be some visualization issue on the displaying end. Anyway, you're doing one thing which in the case of your data might be giving proper results but in general is a bad practice. If you have multivalued fields (like your two Testcase and Status fields) you have no guarantee that they will contain entries matching 1-1 with each other. A simple run-anywhere example to demonstrate:

| makeresults
| eval _raw="[ { \"a\":\"a\",\"b\":\"b\"},{\"a\":\"b\",\"c\":\"c\"},{\"b\":\"d\",\"c\":\"e\"}]"
| spath {}.a output=a
| spath {}.b output=b
| spath {}.c output=c
| spath {} output=pairs

As you can see, the output in fields a, b and c would be completely different if zipped together than what you get as pairs in the array. That's why you should rather parse out whole separate testcases as JSON objects with | spath testcase (or whatever path you have there to your test cases) and then parse each of them separately, so you don't lose the connection between separate fields within a single testcase.
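A minimal sketch of that approach, using a made-up event shaped like the JSON in this thread (field names assumed from the question): extract each testcase object whole, expand, then pull the fields out of each object so name and status always stay paired. It still uses mvexpand, but on whole objects, so the values can never get out of step.

| makeresults
| eval _raw="{\"suite\":[{\"name\":\"ABC_test\",\"testcase\":[{\"classname\":\"xyz\",\"name\":\"foo1\",\"status\":\"Passed\"},{\"classname\":\"pqr\",\"name\":\"foo2\",\"status\":\"Failed\"}]}]}"
| spath output=Suite path=suite{}.name
| spath output=testcase path=suite{}.testcase{}
| mvexpand testcase
| spath input=testcase output=TestCase path=name
| spath input=testcase output=Status path=status
| table Suite TestCase Status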
Thanks @johnhuang , is that utility applicable to physical servers as well?
Hi @super_edition , the only field present in your search is "kubernetes_cluster", but the field in fieldForLabel and fieldForValue is "regions". Use the same field. Ciao. Giuseppe
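In other words, something like this (a sketch based on the source view from the question, with only the two field names changed):

<input type="dropdown" token="regions" searchWhenChanged="false">
  <label>region</label>
  <fieldForLabel>kubernetes_cluster</fieldForLabel>
  <fieldForValue>kubernetes_cluster</fieldForValue>
  <search>
    <query>index="my_index" | stats count by kubernetes_cluster | table kubernetes_cluster | sort kubernetes_cluster</query>
    <earliest>0</earliest>
    <latest></latest>
  </search>
</input>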
OK. If you're using ODBC for Splunk, it executes a saved search and pulls its results into your tool (whatever it is - PowerBI, Excel, anything else), so it's completely independent from the datamodels defined on Splunk's side. It's up to you to prepare a saved search on the Splunk side that will produce the data you'll be pulling with the ODBC driver.
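For example, a minimal savedsearches.conf stanza (the stanza name and the search itself are hypothetical; the ODBC client would then select this saved search by name):

[powerbi_cluster_report]
search = index="my_index" | stats count by kubernetes_cluster | sort kubernetes_cluster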
  { "suite": [ { "hostname": "localhost", "failures": 0, "package": "ABC", "tests": 0, "name": "ABC_test", "id": 0... See more...
  { "suite": [ { "hostname": "localhost", "failures": 0, "package": "ABC", "tests": 0, "name": "ABC_test", "id": 0, "time": 0, "errors": 0, "testcase": [ { "classname": "xyz", "name": "foo1", "time": 0, "status": "Passed" }, { "classname": "pqr", "name": "foo2", "time": 0, "status": "Passed" }, . . . ] } ] }   Hi, @ITWhisperer ,Sorry for that, here is the correct formatted JSON data.
Hello everyone, my Splunk query below works fine in a normal Splunk search and returns the expected results:

index="my_index" | stats count by kubernetes_cluster | table kubernetes_cluster | sort kubernetes_cluster

However, when I have the same query in a dashboard's dropdown, it does not return that data. "Search on Change" is unchecked. The dropdown's source view looks like this:

<input type="dropdown" token="regions" searchWhenChanged="false">
  <label>region</label>
  <fieldForLabel>regions</fieldForLabel>
  <fieldForValue>regions</fieldForValue>
  <search>
    <query>index="my_index" | stats count by kubernetes_cluster | table kubernetes_cluster | sort kubernetes_cluster</query>
    <earliest>0</earliest>
    <latest></latest>
  </search>
</input>
Your example is not correctly formatted JSON. Please provide a valid representative version of your events.
Hi, I am trying to create a transaction, but my starting and ending events are not always producing the correct overview. I expect the yellow-marked group of events as the result:

index=app sourcetype=prd_wcs host=EULMFCP1WVND121 "EquipmentStatusRequest\"=" D0022
| eval _raw = replace(_raw, "\\\\", "")
| eval _raw = replace(_raw, "\"", "")
| rex "Chute:DTT_S01.DA01.(?<Door>[^\,]+)"
| rex "EquipmentName:DTT_S01.DA01.(?<EquipmentName>[^\,]+)"
| rex "EquipmentType:(?<EquipmentType>[^\,]+)"
| rex "Status:(?<EquipmentStatus>[^\,]+)"
| rex "TypeOfMessage:(?<TypeOfMessage>[^\}]+)"
| eval Code = EquipmentStatus+"-"+TypeOfMessage+"-"+EquipmentType
| lookup Cortez_SS_Reasons.csv CODE as Code output STATE as ReasonCode
| where ReasonCode = "Ready" OR ReasonCode = "Full"
| transaction EquipmentName startswith=(ReasonCode="Full") endswith=(ReasonCode="Ready")
| eval latestTS = _time + duration
| eval counter=1
| accum counter as Row
| table _time latestTS Row ReasonCode
| eval latestTS=strftime(latestTS,"%Y-%m-%d %H:%M:%S.%3N")

The search above shows the following overview as a result, and the marked line is not correct. I don't know how this happened, because I expect that the transaction command will always take events starting with "Full" and ending with "Ready". Thanks in advance.
Hi @sbhatnagar88 , in this case you have Splunk and its data in the same file system. Usually Splunk data are stored in a different file system, on a different mount point. In your case my hint is to migrate the installation as it is; then you'll be able to plan moving the data (the indexes) to a different file system. I don't advise doing this in one step. In other words, the best practice is to have separate file systems for: / and the operating system, /var, Splunk, and Splunk data. Ciao. Giuseppe
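As an illustration only, a hypothetical /etc/fstab layout for that separation (the device names and the /splunkdata mount point are made up):

/dev/mapper/vg0-root        /            xfs  defaults  0 1
/dev/mapper/vg0-var         /var         xfs  defaults  0 2
/dev/mapper/vg0-splunk      /splunk      xfs  defaults  0 2
/dev/mapper/vg0-splunkdata  /splunkdata  xfs  defaults  0 2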
Hi @gcusello ,
In our case:
Splunk home is /splunk
Splunk DB is /splunk/var/lib/splunk
Thanks
I have JSON data like this:

"suite":[{"hostname":"localhost","failures":0,"package":"ABC","tests":0,"name":"ABC_test","id":0,"time":0,"errors":0,"testcase":[{"classname":"xyz","name":"foo1","time":0,"status":"Passed"},{"classname":"pqr","name":"foo2","time":0,"status":"Passed"},....

I want to create a table with Suite, testcase_name and Testcase_status as columns. I have a solution using the mvexpand command, but when there is a large amount of data, the output gets truncated by mvexpand.

....| spath output=suite path=suite{}.name
| spath output=Testcase path=suite{}.testcase{}.name
| spath output=Error path=suite{}.testcase{}.error
| spath output=Status path=suite{}.testcase{}.status
| search (suite="*")
| eval x=mvzip(Testcase,Status)
| mvexpand x
| eval y=split(x,",")
| eval Testcase=mvindex(y,0)
| search Testcase IN ("***")
| eval suite=mvdedup(suite)
| eval Status=mvindex(y,1)
| table "Suite" "TestCase" Status

This is the query I'm using, but the results get truncated. Is there an alternative to mvexpand so that I can edit the above query?
I am trying to pull datasets from Splunk into my Power BI Desktop to analyze them. Yes, I am fetching it via global search.
Hi @inventsekar , the error occurs with the first rex command.
Hi @sbhatnagar88 , the mount point is relevant to be sure that the indexes.conf and splunk-launch.conf files point to the correct mount points (the same as the old installation). So, in this case you can restore the old $SPLUNK_HOME folder and your installation will run exactly as before. It's usual to have different file systems between the system application (Splunk) and its data; what is your situation? What are the $SPLUNK_HOME and $SPLUNK_DB folders? Ciao. Giuseppe
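For reference, those paths live in $SPLUNK_HOME/etc/splunk-launch.conf; a sketch using the paths mentioned in this thread:

# splunk-launch.conf - values matching the layout described above
SPLUNK_HOME=/splunk
SPLUNK_DB=/splunk/var/lib/splunk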
As I said - you have to escape the sensitive characters within the string argument. Which means that instead of a single backslash you have to use two backslashes, and instead of just a quote you need an escaped quote (backslash quote).
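Applied to the rex from this thread, that would look something like this (a sketch; only the quotes are escaped, assuming the rest of the pattern is what you intended):

| rex "^(?:[^,\n]*,){7}\s+\"\w+_\w+_\w+_\w+_\w+\":\s+\"(?P<POH>[^\"]+)\""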
And what are you "pulling" from Splunk? Honest question, I have no idea how the Splunk ODBC driver works - do you define a search as your data source globally, or do you define one every time you call that data source?
Hi @Real_captain , troubleshooting a rex command is often a difficult task, particularly when we don't know what the issue itself is. To understand the error message (search command required before "^"...), it would be great if you could copy-paste a sample log line (remove sensitive details like hostnames, IP addresses, etc.). Maybe try this step-by-step troubleshooting.

First, this rex command:
| rex "(?P<POH>[^"]+)" | table POH

Then, this rex command:
| rex "\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH>[^"]+)" | table POH

At last, this rex command:
| rex "^(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH>[^"]+)" | table POH
Hi @gcusello ,
Thanks for the feedback. We are planning to keep exactly the same mount points.
1. In that case, if we take a backup of the /splunk directory and restore it after the new OS is built, will that restore all configuration and data as in the original? - please answer.
2. Also, we planned to separate the data and OS disks, format only the OS disk, and once the new OS is configured, restore the data disk. Do you think this approach will work? - please answer.
Thanks
Sushant
@PickleRick , I am getting the error below while using the expression with the rex command:

| rex "^(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH>[^"]+)"

Error in 'SearchParser': Missing a search command before '^'. Error at position '161' of search query 'search index="events_prod_val_ssip_esa" sourcetype...{snipped} {errorcontext = "(?P<POH>[^"]+)"}'.