All Posts


Hi, @PickleRick , I wrote a query like this:

| spath output=suite path=suite{}.name
| spath output=Testcase path=suite{}.testcase{}.name
| spath output=Status path=suite{}.testcase{}.status
| table suite Testcase Status

The problem here is that multiple values end up in a single row. I want to break these values out and print them in different rows. Is there any option other than mvexpand?
As has already been said, you must escape all special characters! ... | rex "(?P<POH>[^\"]+)" should fix this one. Just do the rest the same way.
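If it helps, here is a minimal run-anywhere sketch of the same idea (the sample event is made up; only the POH field name comes from the thread) showing how the double quotes are escaped inside the rex string:

| makeresults
| eval _raw="\"POH\":\"1234\", \"other\":\"x\""
| rex "\"POH\":\"(?P<POH>[^\"]+)\""
| table POH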
Hi Have you read this https://conf.splunk.com/files/2022/slides/PLA1122B.pdf ? I suppose that you can contact Mary in Splunk UG Slack if you are needing some help? r. Ismo
Hi, @ITWhisperer if mvexoand is used the results are truncated and i get a warning message. Any other alternative to mvexpand command is available?
| spath suite{}.testcase{} output=testcase | mvexpand testcase | spath input=testcase | table name status
Transaction seems to have a mind of its own (there are some not-well-documented nuances to how it works). Try something like this before your transaction command (to give it a hand!)

| streamstats count(eval(ReasonCode="Full")) as fullCount count(eval(ReasonCode="Ready")) as readyCount by EquipmentName
| where fullCount=1 OR readyCount=1
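For context, a sketch of where that could sit in the pipeline from the question (only the streamstats and second where lines are the addition; the surrounding lines are unchanged from the original search):

...
| where ReasonCode = "Ready" OR ReasonCode = "Full"
| streamstats count(eval(ReasonCode="Full")) as fullCount count(eval(ReasonCode="Ready")) as readyCount by EquipmentName
| where fullCount=1 OR readyCount=1
| transaction EquipmentName startswith=(ReasonCode="Full") endswith=(ReasonCode="Ready")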
Splunk functions should _not_ truncate any data on their own (unless you explicitly use some text-manipulation function, of course). There might be some visualization issue on the displaying end.

Anyway, you're doing one thing which in the case of your data might be giving proper results but in general is a bad practice. If you have multivalued fields (like your two fields Testcase and Status) you have no guarantee that they will contain entries matching 1-1 with each other. A simple run-anywhere example to demonstrate:

| makeresults
| eval _raw="[ { \"a\":\"a\",\"b\":\"b\"},{\"a\":\"b\",\"c\":\"c\"},{\"b\":\"d\",\"c\":\"e\"}]"
| spath {}.a output=a
| spath {}.b output=b
| spath {}.c output=c
| spath {} output=pairs

As you can see, the output in fields a, b and c would be completely different if zipped together than what you get as pairs in the array. That's why you should rather parse out the whole separate testcases as JSON objects with | spath testcase (or whatever path you have there to your test cases) and then parse each of them separately, so you don't lose the connection between separate fields within a single testcase.
Thanks @johnhuang , is that utility applicable to physical servers as well?
Hi @super_edition , the only field present in your search is "kubernetes_cluster", but the field in fieldForLabel and fieldForValue is "regions". Use the same field. Ciao. Giuseppe
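For illustration, a sketch based on the XML in the question, changing only fieldForLabel and fieldForValue to the field the search actually returns:

<input type="dropdown" token="regions" searchWhenChanged="false">
  <label>region</label>
  <fieldForLabel>kubernetes_cluster</fieldForLabel>
  <fieldForValue>kubernetes_cluster</fieldForValue>
  <search>
    <query>index="my_index" | stats count by kubernetes_cluster | table kubernetes_cluster | sort kubernetes_cluster</query>
    <earliest>0</earliest>
    <latest></latest>
  </search>
</input>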
OK. If you're using ODBC for Splunk, it executes a saved search and pulls its results into your tool (whatever it is - PowerBI, Excel, anything else). So it's completely independent of the datamodels defined on Splunk's side. It's up to you to prepare a saved search on the Splunk side that will produce the data you'll be pulling with the ODBC driver.
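As a rough sketch (the stanza name, app and search string are made-up placeholders), such a saved search could be defined in savedsearches.conf and then selected from the ODBC client:

# $SPLUNK_HOME/etc/apps/search/local/savedsearches.conf
[powerbi_export]
search = index="my_index" | stats count by kubernetes_cluster
dispatch.earliest_time = -24h
dispatch.latest_time = now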
  { "suite": [ { "hostname": "localhost", "failures": 0, "package": "ABC", "tests": 0, "name": "ABC_test", "id": 0... See more...
  { "suite": [ { "hostname": "localhost", "failures": 0, "package": "ABC", "tests": 0, "name": "ABC_test", "id": 0, "time": 0, "errors": 0, "testcase": [ { "classname": "xyz", "name": "foo1", "time": 0, "status": "Passed" }, { "classname": "pqr", "name": "foo2", "time": 0, "status": "Passed" }, . . . ] } ] }   Hi, @ITWhisperer ,Sorry for that, here is the correct formatted JSON data.
Hello Everyone, the Splunk query below works fine in a normal Splunk search and returns the expected results:

index="my_index" | stats count by kubernetes_cluster | table kubernetes_cluster | sort kubernetes_cluster

However, when I have the same query in a dashboard dropdown, it does not return that data. Search on Change is unchecked. The dropdown looks like this (source view):

<input type="dropdown" token="regions" searchWhenChanged="false">
  <label>region</label>
  <fieldForLabel>regions</fieldForLabel>
  <fieldForValue>regions</fieldForValue>
  <search>
    <query>index="my_index" | stats count by kubernetes_cluster | table kubernetes_cluster | sort kubernetes_cluster</query>
    <earliest>0</earliest>
    <latest></latest>
  </search>
</input>
Your example is not correctly formatted JSON. Please provide a valid representative version of your events.
Hi, I am trying to create a transaction where my starting and ending 'event' are not always showing the correct overview. I expect the yellow-marked group of events as the result:

index=app sourcetype=prd_wcs host=EULMFCP1WVND121 "EquipmentStatusRequest\"=" D0022
| eval _raw = replace(_raw, "\\\\", "")
| eval _raw = replace(_raw, "\"", "")
| rex "Chute:DTT_S01.DA01.(?<Door>[^\,]+)"
| rex "EquipmentName:DTT_S01.DA01.(?<EquipmentName>[^\,]+)"
| rex "EquipmentType:(?<EquipmentType>[^\,]+)"
| rex "Status:(?<EquipmentStatus>[^\,]+)"
| rex "TypeOfMessage:(?<TypeOfMessage>[^\}]+)"
| eval Code = EquipmentStatus+"-"+TypeOfMessage+"-"+EquipmentType
| lookup Cortez_SS_Reasons.csv CODE as Code output STATE as ReasonCode
| where ReasonCode = "Ready" OR ReasonCode = "Full"
| transaction EquipmentName startswith=(ReasonCode="Full") endswith=(ReasonCode="Ready")
| eval latestTS = _time + duration
| eval counter=1
| accum counter as Row
| table _time latestTS Row ReasonCode
| eval latestTS=strftime(latestTS,"%Y-%m-%d %H:%M:%S.%3N")

The search above is showing the following overview as a result, and the marked line is not correct. I don't know how this happened, because I expect that the transaction command will always take the first events starting with "Ready" and ending with "Full". Thanks in advance.
Hi @sbhatnagar88 , in this case you have Splunk and its data in the same file system. Usually Splunk data is stored in a different file system on a different mount point. In your case I suggest migrating the installation as it is; then you'll be able to plan moving the data (indexes) to a different file system. I don't suggest doing this in one step. In other words, the best practice is to have separate file systems for: / and the operating system, /var, Splunk, and Splunk data. Ciao. Giuseppe
Hi @gcusello ,
In our case:
Splunk home is /splunk
Splunk DB is /splunk/var/lib/splunk
Thanks
I have JSON data like this:

"suite":[{"hostname":"localhost","failures":0,"package":"ABC","tests":0,"name":"ABC_test","id":0,"time":0,"errors":0,"testcase":[{"classname":"xyz","name":"foo1","time":0,"status":"Passed"},{"classname":"pqr","name":"foo2)","time":0,"status":"Passed"},....

I want to create a table with Suite, testcase_name and Testcase_status as columns. I have a solution using the mvexpand command, but when there is a large amount of data the output gets truncated. This is the query I'm using:

....| spath output=suite path=suite{}.name
| spath output=Testcase path=suite{}.testcase{}.name
| spath output=Error path=suite{}.testcase{}.error
| spath output=Status path=suite{}.testcase{}.status
| search (suite="*")
| eval x=mvzip(Testcase,Status)
| mvexpand x
| eval y=split(x,",")
| eval Testcase=mvindex(y,0)
| search Testcase IN ("***")
| eval suite=mvdedup(suite)
| eval Status=mvindex(y,1)
| table "Suite" "TestCase" Status

But the results get truncated. Is there any alternative to mvexpand so that I can edit the above query?
I am trying to pull the datasets from Splunk into my Power BI desktop to analyze them. Yes, I am fetching it for a global search.
Hi @inventsekar , the error occurs with the first rex command.
Hi @sbhatnagar88 , the mount point is relevant to be sure that the indexes.conf and splunk-launch.conf files point to the correct mount points (the same as the old installation). So, in this case you can restore the old $SPLUNK_HOME folder and your installation will run exactly as before. It's usual to have different file systems for the system application (Splunk) and the data; what is your situation? What are the $SPLUNK_HOME and $SPLUNK_DB folders? Ciao. Giuseppe
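For reference, a minimal sketch of the settings to check after the restore (the paths shown are the ones mentioned in this thread; adjust them if your mount points differ):

# $SPLUNK_HOME/etc/splunk-launch.conf
SPLUNK_HOME=/splunk
SPLUNK_DB=/splunk/var/lib/splunk

# indexes.conf - index paths usually reference $SPLUNK_DB, e.g. for the main index
[main]
homePath   = $SPLUNK_DB/defaultdb/db
coldPath   = $SPLUNK_DB/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb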