This is the output of the query. Also, an example of my event (fields collapsed in the event viewer are shown as [+]):

browsers: { [+] }
coverageResult: { [+] }
libraryPath: libs/funnels
result: { [-]
  82348856: [ [+] ]
}
summary: { [+] }
}
It is working now. Thanks
No, you can't (easily and efficiently) make such "dynamic" extraction. Splunk is very good at dealing with key-value fields, but it doesn't have any notion of "structure" in data. It can parse out JSON or XML into flat key-value pairs in several ways (auto_kv, spath/xpath, indexed extractions), but all those methods have some drawbacks, as the structure of the data is lost and is only partially retained in field naming.

So if you handle JSON/XML data it's often the best idea (if you have the possibility, of course) to influence the event-emitting side so that the events are easily parseable and can be processed in Splunk without much overhead. Because your data (which you haven't posted a sample of - shame on you) most probably contains something like:

{
  [... some other part of json ...],
  "result": {
    "some_event_id": { [... event data ...] },
    "another_event_id": { [... event data ...] }
  }
}

While it would be much better to have it as:

{
  [...]
  "result": [
    {
      "id": "first_id",
      [... result details ...]
    },
    {
      "id": "another_id",
      [... result details ...]
    }
  ]
}

It would be much better because then you'd have a static, easily accessible field called id. Of course, from Splunk's point of view, if you managed to flatten the events even more (possibly splitting them into several separate ones), that would be even better.

With the format you have, it's gonna be tough, since it's not getting parsed as a multivalued field: you don't have an array in your JSON but separate fields. You might try some clever foreach magic, but I can't guarantee success here. An example of such an approach is here in the run-anywhere example:

| makeresults
| eval json="{\"result\":{\"1\":[{\"a\":\"n\"},{\"b\":\"m\"}],\"2\":[{\"a\":\"n\"},{\"b\":\"m\"}]}}"
| spath input=json
| foreach result.*{}.a
    [ | eval results=mvappend(results,"<<MATCHSTR>>" . ":" . '<<FIELD>>') ]
| mvexpand results
| eval resultsexpanded=split(results,":")
| eval resultid=mvindex(resultsexpanded,0), resultvalue=mvindex(resultsexpanded,1)
| table resultid, resultvalue

But as you can see, it's nowhere near pretty.
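For contrast, here is how little work the array-with-id shape would need; a minimal run-anywhere sketch (the field names are hypothetical, mirroring the restructured JSON above):

| makeresults
| eval json="{\"result\":[{\"id\":\"first_id\",\"value\":\"n\"},{\"id\":\"another_id\",\"value\":\"m\"}]}"
| spath input=json path=result{} output=result
| mvexpand result
| spath input=result
| table id, value

Because id is now a static field name, no foreach gymnastics are needed.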
Hi @aasserhifni, did you try to push the apps from the Deployer? Apps not present in the Deployer's $SPLUNK_HOME/etc/shcluster/apps should be removed from the Search Head Cluster. Ciao. Giuseppe
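For reference, a bundle push from the deployer looks something like this (hostname and credentials are placeholders; -target can be any cluster member):

$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme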
browsers: { [+] }
coverageResult: { [+] }
libraryPath: libs/funnels
result: { [-]
  82348856: [ [+] ]
}
summary: { [+] }
}
Can you guess what I am going to say? Please share some anonymised representative sample events so we can see what you are dealing with, preferably in a code block </> to prevent format information being lost.
Hello @gcusello
Sorry for my late response. Unfortunately, it is installed only on the search head that is a cluster member; it sits in the path /opt/splunk/etc/apps there, and the app has no existence on the deployer. When I tried your solution of stopping this search head, removing the folders, and restarting it, the app folder is still inside the search head's apps directory (I can see it from the CLI).
Replace <form> with <form version="1.1"> or, optionally, <form version="1.1" theme="dark">.
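That is, the version goes on the root element as an attribute, not as a separate tag; a minimal sketch:

<form version="1.1">
  <label>My dashboard</label>
  <!-- rows, panels, inputs as before -->
</form>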
Hello, I've used the example above and it works just fine, but I have a small notice which I can't get rid of. It might not be related to this subject, but as long as it is on this page: "This dashboard version is missing. Update the dashboard version in source." So the question raised: where should I add/insert the dashboard version tag, as outside the form tags it is not accepted, and inside the form tags it is not accepted either. (Edit Dashboard -> Source) Thank you
Hello to everyone! I have a Splunk instance with DMC. Every day I see this message in the Errors report:

04-22-2024 03:03:08.599 +0300 ERROR AdminManagerDispatch [56824 TcpChannelThread] - Admin handler 'resource-usage' not found.

What does it mean? How can I fix it?
@deepakc I checked the first command, /opt/splunkforwarder/bin/splunk list deploy-poll: it is pointing to the right DNS record. Also, I tried "dig <dns record name>"; it is showing the IP address of the new deployment server. I tried your second command, and I see the configuration is made in the local directory: /opt/splunk/etc/system/local/deploymentclient.conf. I tried set deploy-poll as well; it is still pointing to the old server. Previously the connection was fine. I removed and re-set up the deployment server on the same instance, and after this I am facing the issue. Regards, PNV
# On your forwarders, check this to show what the target is:
/opt/splunkforwarder/bin/splunk show deploy-poll

# On your forwarders, check this to show what the config is:
/opt/splunkforwarder/bin/splunk btool deploymentclient list --debug

It might be that the configuration has been set in the system local config (/opt/splunkforwarder/etc/system/local/deploymentclient.conf), or sometimes it's in a custom app (the btool output above should show you this). If so, change it to the new address (ensure firewalls and ports are accessible):

/opt/splunkforwarder/bin/splunk set deploy-poll <IP_address/hostname>:<management_port>
/opt/splunkforwarder/bin/splunk restart
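For reference, the resulting config pointing at the new server would look roughly like this (hostname and port are example values):

# /opt/splunkforwarder/etc/system/local/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = new-ds.example.com:8089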
Hello, I have this query:

index="github_runners" sourcetype="testing" source="reports-tests"
| spath path=libraryPath output=library
| spath path=result.69991058{} output=testResult
| mvexpand testResult
| spath input=testResult path=fullName output=test_name
| spath input=testResult path=success output=test_outcome
| spath input=testResult path=skipped output=test_skipped
| spath input=testResult path=time output=test_time
| table library testResult test_name test_outcome test_skipped test_time
| eval status=if(test_outcome="true", "Passed", if(test_outcome="false", "Failed", if(test_skipped="true", "NotExecuted", "")))
| stats count sum(eval(if(status="Passed", 1, 0))) as passed_tests, sum(eval(if(status="Failed", 1, 0))) as failed_tests, sum(eval(if(status="NotExecuted", 1, 0))) as test_skipped by test_name library test_time
| eval total_tests = passed_tests + failed_tests
| eval success_ratio=round((passed_tests/total_tests)*100,2)
| table library, test_name, total_tests, passed_tests, failed_tests, test_skipped, success_ratio test_time
| sort + success_ratio

and I'm trying to make it dynamic so I will see results for numbers other than '69991058'. How can I do that? I'm trying with regex, but it looks like I'm doing something wrong, since I'm getting 0 results while the first query returns results:

index="github_runners" sourcetype="testing" source="reports-tests"
| spath path=libraryPath output=library
| rex field=_raw "result\.(?<number>\d+)\{\}"
| spath path="result.number{}" output=testResult
| mvexpand testResult
| spath input=testResult path=fullName output=test_name
| spath input=testResult path=success output=test_outcome
| spath input=testResult path=skipped output=test_skipped
| spath input=testResult path=time output=test_time
| table library testResult test_name test_outcome test_skipped test_time
| eval status=if(test_outcome="true", "Passed", if(test_outcome="false", "Failed", if(test_skipped="true", "NotExecuted", "")))
| stats count sum(eval(if(status="Passed", 1, 0))) as passed_tests, sum(eval(if(status="Failed", 1, 0))) as failed_tests, sum(eval(if(status="NotExecuted", 1, 0))) as test_skipped by test_name library test_time
| eval total_tests = passed_tests + failed_tests
| eval success_ratio=round((passed_tests/total_tests)*100,2)
| table library, test_name, total_tests, passed_tests, failed_tests, test_skipped, success_ratio test_time
| sort + success_ratio
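One possible way to make the key dynamic (a sketch, not guaranteed for this exact data; it assumes Splunk 8.2+ for the json_* eval functions and a single numeric key under result): spath's path argument must be a literal, so a rex-extracted field can't be interpolated into it, but json_extract is an eval function and does accept a computed path string.

index="github_runners" sourcetype="testing" source="reports-tests"
| spath path=libraryPath output=library
| rex field=_raw "\"result\"\s*:\s*\{\s*\"(?<number>\d+)\""
| eval testResult=json_array_to_mv(json_extract(_raw, "result.".number))
| mvexpand testResult
| spath input=testResult path=fullName output=test_name

From there the rest of the original pipeline (the other spath calls, eval, and stats) should apply unchanged.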
Hi Splunkers, I am working on creating a column + line chart dashboard that shows database latency. I'm encountering an issue where I'm trying to pass a token value to the overlay options for the line chart representation over a column chart. Here is what I currently have. My chart and my SPL query:

SPL:
index=development sourcetype=rwa_custom_function user_action=swmfs_test ds_file=*
| eval ds_file_path=ds_path."\\".ds_file
| chart avg(ms_per_block) as avg_processing_time_per_block over ds_file_path by machine
| appendcols
    [ search index=development sourcetype=rwa_custom_function user_action=swmfs_test ds_file=*
    | eval ds_file_path=ds_path."\\".ds_file
    | stats max(block_count) as total_blocks by ds_file_path ]

I need to assign the overlay field value (avg_processing_time_per_block) from this line in the SPL:

| chart avg(ms_per_block) as avg_processing_time_per_block over ds_file_path by machine

The reason I'm attempting to assign it as a token is that avg_processing_time_per_block has dynamic values (sometimes it may be 10 or 12 machines' data) instead of rwmini and rwws01. The column has the total_blocks value.

Or is there any other way to achieve this requirement? Your thoughts on these are highly appreciated. Thank you in advance. Sanjai
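One pattern that may work here (a sketch with hypothetical panel structure): have a secondary search build the comma-separated list of overlay fields and set a token in its <done> handler, then reference that token in the chart's overlayFields option.

<search>
  <query>
    index=development sourcetype=rwa_custom_function user_action=swmfs_test
    | stats values(machine) as machines
    | eval overlay=mvjoin(machines, ",")
  </query>
  <done>
    <set token="overlay_fields">$result.overlay$</set>
  </done>
</search>

<chart>
  <!-- your column + line search here -->
  <option name="charting.chart.overlayFields">$overlay_fields$</option>
</chart>

The overlay is only applied once the secondary search finishes and the token is set, which is usually acceptable for dashboards of this kind.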
Hi. limits.conf on the indexers or simply on the search head(s)? Or better both?

EDIT: better on the indexers' side, since limit applies at search time from the SH to the indexer peers, and 100 is the default limit. I also had this "problem" with a ~150-field JSON, and a simple

[kv]
limit = 0
indexed_kv_limit = 0
maxcols = 512
maxchars = 102400

solved it on the indexer(s) side. Thanks for the trick.
Hi All, I have deployed a new deployment server (AWS EC2 instance) and updated the existing Route 53 DNS entry to point to this new server. But I see the deployment clients are still making connections to the old server. I believe there is an old connection saved at the deployment client. Does any of you know how to resolve this issue? Your help would be much appreciated. Regards, PNV
@karthi2809 Please check the below sample XML. Observe the `new_value` token and use it in your search.

<form version="1.1" theme="dark">
  <label>Application</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="BankApp" searchWhenChanged="true">
      <label>ApplicationName</label>
      <choice value="*">All</choice>
      <search>
        <query>
          | makeresults
          | eval applicationName="Test1,Test2,Test3"
          | eval applicationName=split(applicationName,",")
          | stats count by applicationName
          | table applicationName
        </query>
      </search>
      <fieldForLabel>applicationName</fieldForLabel>
      <fieldForValue>applicationName</fieldForValue>
      <default>*</default>
      <prefix>applicationName="</prefix>
      <suffix>"</suffix>
      <change>
        <condition match="$value$==&quot;*&quot;">
          <set token="new_value">applicationName IN ("Test1", "TEST2", "Test3")</set>
        </condition>
        <condition>
          <set token="new_value">applicationName = $BankApp$</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        Dropdown Value = $BankApp$ <br/>
        new_value = $new_value$
      </html>
    </panel>
  </row>
</form>

I hope this will help you. Thanks, KV. If any of my replies help you to solve the problem or gain knowledge, an upvote would be appreciated.
Yes, it was the padding / max cache size that was the culprit. The calculation I did was wrong.   Thank you André
Hi @mdunnavant, I noticed you were working on passing a token to a chart overlay. I'm encountering a similar issue where I'm trying to pass a token value to the overlay options for a line chart representation over a column chart. If you've managed to achieve this, could you please share how you made it overlay with a token value? Your insights would be greatly appreciated. My chart and my SPL query:

SPL:
index=development sourcetype=rwa_custom_function user_action=swmfs_test ds_file=*
| eval ds_file_path=ds_path."\\".ds_file
| chart avg(ms_per_block) as avg_processing_time_per_block over ds_file_path by machine
| appendcols
    [ search index=development sourcetype=rwa_custom_function user_action=swmfs_test ds_file=*
    | eval ds_file_path=ds_path."\\".ds_file
    | stats max(block_count) as total_blocks by ds_file_path ]

I need to assign the overlay field value from this line in the SPL:

| chart avg(ms_per_block) as avg_processing_time_per_block over ds_file_path by machine

The reason I'm attempting to assign it as a token is that avg_processing_time_per_block has dynamic values (sometimes it may be 10 or 12) instead of rwmini and rwws01. Thanks in advance.
Thanks in advance. I am using dropdown values for my requirement. In each dropdown I am using a token, getting the values from an inputlookup, and passing the value to the Splunk query. There are two dropdowns: one is application name, the other is interface name. If I select values, I get results. If I select ALL, the value shows as * in the Splunk query. Instead of *, I want to get the values as OR conditions: if the token gets *, it shows all the values, but I want to show only the values coming from the inputlookup for both application name and interface name.

When I am selecting ALL, my Splunk query looks like this:

index=mulesoft environment=PRD (applicationName="*" OR priority IN ("ERROR", "WARN"))
| stats values(*) AS * BY correlationId applicationName
| rename content.InterfaceName AS InterfaceName content.FileList{} AS FileList content.Filename as FileName content.ErrorMsg as ErrorMsg
| eval Status=case(priority="ERROR","ERROR", priority="WARN","WARN", priority!="ERROR","SUCCESS")
| fields Status InterfaceName applicationName FileList FileName correlationId ErrorMsg message
| search InterfaceName="*" FileList="*"
| sort -timestamp

I am expecting:

index=mulesoft environment=PRD applicationName IN ("Test1", "TEST2", "Test3") OR priority IN ("ERROR", "WARN")
| stats values(*) AS * BY correlationId applicationName
| rename content.InterfaceName AS InterfaceName content.FileList{} AS FileList content.Filename as FileName content.ErrorMsg as ErrorMsg
| eval Status=case(priority="ERROR","ERROR", priority="WARN","WARN", priority!="ERROR","SUCCESS")
| fields Status InterfaceName applicationName FileList FileName correlationId ErrorMsg message
| search InterfaceName IN ("aa", "bb", "cc") AND FileList="*"
| sort -timestamp

Dropdown code:

<input type="dropdown" token="BankApp" searchWhenChanged="true" depends="$BankDropDown$">
  <label>ApplicationName</label>
  <choice value="*">All</choice>
  <search>
    <query>
      | inputlookup BankIntegration.csv
      | dedup applicationName
      | sort applicationName
      | table applicationName
    </query>
  </search>
  <fieldForLabel>applicationName</fieldForLabel>
  <fieldForValue>applicationName</fieldForValue>
  <default>*</default>
  <prefix>applicationName="</prefix>
  <suffix>"</suffix>
</input>
<input type="dropdown" token="interface" searchWhenChanged="true" depends="$BankDropDown$">
  <label>InterfaceName</label>
  <choice value="*">All</choice>
  <search>
    <query>
      | inputlookup BankIntegration.csv
      | search $BankApp$
      | sort InterfaceName
      | table InterfaceName
    </query>
  </search>
  <fieldForLabel>InterfaceName</fieldForLabel>
  <fieldForValue>InterfaceName</fieldForValue>
  <default>*</default>
  <prefix>InterfaceName="</prefix>
  <suffix>"</suffix>
</input>