All Posts

<row rejects="$hide_panel$">
  <panel>
    <table>
      <search>
        <done>
          <condition match="'job.resultCount' == 0">
            <set token="hide_panel">true</set>
          </condition>
          <condition>
            <unset token="hide_panel"></unset>
          </condition>
        </done>
        <query>| makeresults | timechart count span=1d partial=f</query>
        <earliest>0</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
Hi @ITWhisperer, thanks for your reply. I cannot shorten this, as I need the full length of the legend. I have also tried moving it to the top or bottom, but I get the same output. I actually want to set the ellipsis to none, but I am not sure where to place this in the search query. Thanks
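A minimal sketch of where this setting goes, assuming the panel is a <chart> in classic Simple XML: legend truncation is controlled by a chart option in the dashboard XML, not by anything in the SPL. Inside the <chart> element, add:

<option name="charting.legend.labelStyle.overflowMode">ellipsisNone</option>

With ellipsisNone the legend labels are not truncated (the other documented values are ellipsisEnd, ellipsisMiddle and ellipsisStart).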
The same solution should work

<row rejects="$hide_panel$">
  <panel>
    <table>
      <search>
        <done>
          <condition match="'job.resultCount' == 0">
            <set token="hide_panel">true</set>
          </condition>
          <condition>
            <unset token="hide_panel"></unset>
          </condition>
        </done>
        <query>| makeresults | timechart count span=1d partial=f</query>
        <earliest>0</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
Hi Ryan, I used the documentation to set the schedule, but that did not answer my question, as you can only set one schedule per Health Rule. How would you set multiple schedules for a Health Rule, like the example I put in my original post? Thanks, S.
Ciao Gcusello,

Yes, I reviewed these links; however, those steps did not work on Oracle Linux 6.x. On Google I found the steps below, which I followed and which worked on Oracle 6.x. Let me know if I am missing anything here, but this worked for my system.

----steps-----

1. Create a symbolic link in init.d, pointing the Splunk init script into the /etc/init.d/ directory:

sudo ln -s /opt/splunkforwarder/bin/splunk /etc/init.d/splunk

2. Configure Splunk to start at boot using chkconfig:

sudo chkconfig splunk on

3. Verify the setting:

sudo chkconfig --list splunk

You should see output similar to:

splunk 0:off 1:off 2:on 3:on 4:on 5:on 6:off

4. Manually restart the Splunk service.
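For step 4, a minimal sketch of the restart command, assuming the init.d symlink created in step 1:

sudo service splunk restart

or, invoking the binary directly:

sudo /opt/splunkforwarder/bin/splunk restart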
There is not much that can be done apart from placing the legend at the top or bottom of the chart. If you still get truncated names, then look to shorten them in your search.
Hi

I have a use case that involves copying historical data (6 months old) from a 3-indexer cluster to another machine. I have considered two potential solutions:

1. Stop one of the indexers and copy the index from the source to the destination. The only drawback is that the data might be incomplete, but this is not a concern as the data is for testing purposes. Given the volume of data, this process could take a significant amount of time.

2. Create a new index using the collect command with the required set of data, then copy the index to the other machine. I believe this would be the best way to implement this use case. However, the downside is that the data volume is quite large, so executing the SPL could take a considerable amount of time on the search head and potentially affect the performance of the SH. I am unsure whether the collect command can handle such a large search and create an index. Is there a limitation on the size of the data when using the collect command?

Please advise on how to best handle this scenario.

Regards,
Pravin
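For option 2, a minimal sketch of the collect approach, with hypothetical index names (the destination index must already exist on the target):

index=main earliest=-6mon@mon latest=now
| collect index=history_copy

collect writes the search results back into the target index as stash-sourcetype events. For a volume this large, a common way to limit search-head load is to run the collect in smaller time-bounded chunks (for example one day per scheduled run) rather than as a single six-month search.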
I am looking for the hide solution in a timechart instead of stats: | timechart count span=1d partial=false
If you don't want them dropped, don't include App_Name IN (*), simples! 
How can I prevent truncation of the legend on a classic Splunk dashboard? The output has an ellipsis in the middle of my legend, but I want to show the full text. See my query below:

index=$ss_name$_$nyName_tok$_ sourcetype=plt (Instrument="ZT2" OR Instrument="XY2" OR Instrument="P4")
| rex field=Instrument "(Calculated)\.(?<tag>.+)$$"
| timechart span=$span$ max(ValueEng) by tag

Thanks
I am looking for this logic in a timechart. Will this logic work for a timechart count? | timechart count span=1d partial=false
Ok, to simplify it as much as possible: there are two indexes:

INDEXA: FieldA, FieldB
INDEXB: FieldA, FieldC

To create a relation between the indexes I need to modify INDEXB.FieldA:

| eval FieldA1 = mvindex(split(FieldA, ","), 0)

and now I want to group by FieldA/FieldA1 and FieldB, and count FieldC.
It is completely different logic than in relational databases; it is taking me some time to switch to this "new" one. Ok, one more condition I noticed: the two indexes are linked by FieldA. The point is that FieldA in INDEXB needs to be converted:

| eval ModA = mvindex(split(FieldA, ","), 0)

So the one-to-many relation is INDEXA.FieldA = INDEXB.ModA. Is it clear what I am writing about?
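A minimal sketch of the usual SPL pattern for this kind of one-to-many relation, using the field names from this thread (the aggregate names are hypothetical): instead of a relational join, both indexes are searched together, normalised onto a common key, and then grouped with stats.

index=INDEXA OR index=INDEXB
| eval joinkey = if(index="INDEXB", mvindex(split(FieldA, ","), 0), FieldA)
| stats values(FieldB) as FieldB count(FieldC) as FieldC_count by joinkey

This is how Splunk typically expresses what a SQL join would do, and it avoids the row limits of the join command.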
Hi Akermaier,

Were you able to configure SAML assertion decryption in Splunk Cloud? We are trying to configure the same: we got the publicKey.crt file from Splunk support, but we are facing an error while configuring it with the IdP team. The error is mentioned below:

Verification of SAML assertion using the IDP's certificate provided failed. Error: start node xmlSecNodeSignature not found in document

Have you seen this issue when configuring SAML assertion decryption? If yes, how was it resolved?

Thanks
Hi Splunkers,

I am working on creating custom alerts using JavaScript in Splunk. I have created the SPL for the alert and configured the definition in macros.conf as follows:

index=XXXX sourcetype=YYYY earliest=-30d latest=now
| eval ds_file_path=file_path."\\".file_name
| stats avg(ms_per_block) as avg_processing_time by machine ds_file_path
| eval avg_processing_time = round(avg_processing_time, 1)." ms"

From the JS part, I set the alert condition:

| search avg_processing_time >= 2

When I run the SPL in Splunk, it returns about 5 to 6 rows of results:

machine      ds_file_path                             avg_processing_time
Machine01    C:\Program Files\Splunk\etc\db\gdb.ds    5.3 ms
Machine02    C:\Program Files\Splunk\etc\db\rwo.ds    7.6 ms
Machine03    C:\Program Files\Splunk\etc\db\gdb.ds    7.5 ms
Machine04    C:\Program Files\Splunk\etc\db\rwo.ds    8.3 ms

However, when I send the condition's result as an alert email, it includes only one row of the result, like the one mentioned below:

machine      ds_file_path                             avg_processing_time
Machine01    C:\Program Files\Splunk\etc\db\gdb.ds    5.3 ms

I need to send all the populated results (avg_processing_time >= 2) in the email. How can I achieve this? I need some guidance. Here are the alert parameters I used:

{
  'name': 'machine_latency_' + index,
  'action.email.subject.alert': 'Machine latency for the ' + index + ' environment has reached an average processing time ' + ms_threshold + ' milliseconds...',
  'action.email.sendresults': 1,
  'action.email.message.alert': 'In the ' + index + ' environment, the machine $result.machine$ has reached the average processing time of $result.avg_processing_time_per_block$ .',
  'action.email.to': email,
  'action.logevent.param.event': '{"session_id": $result.machine$, "user": $result.avg_processing_time_per_block$}',
  'action.logevent.param.index': index,
  'alert.digest_mode': 0,
  'alert.suppress': 1,
  'alert.suppress.fields': 'session_id',
  'alert.suppress.period': '24h',
  'alert_comparator': 'greater than',
  'alert_threshold': 0,
  'alert_type': 'number of events',
  'cron_schedule': cron_expression,
  'dispatch.earliest_time': '-30m',
  'dispatch.latest_time': 'now',
  'description': 'XXXXYYYYYZZZZ',
  'search': '|`sw_latency_above_threshold(the_index=' + index + ')`'
}

I suspect that the email action is not configured to include the entire result set. Could someone help me with how to modify the alert settings or the SPL to ensure that all results are included in the alert email?

Thanks in advance for your help!
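Two details are worth noting here. $result.fieldname$ tokens expand from the first result row only, so a message body built from them will always show a single row; and with 'alert.digest_mode': 0 the alert triggers once per result rather than once for the whole result set. A minimal sketch of the savedsearches.conf email-action settings that embed the full result table in the message body (whether this resolves the behaviour with this particular JS setup is an assumption):

'alert.digest_mode': 1,
'action.email.sendresults': 1,
'action.email.inline': 1,
'action.email.format': 'table'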
Hello Everyone,

I built a Splunk query (shared below) recently and I noticed that applying the search condition App_Name IN (*) actually reduces the number of events scanned.

For example, if I execute the following query, it has 2515 events (not to be confused with statistics):

index=msad_hcv NOT ("forwarded")
| spath output=role_name path=auth.metadata.role_name
| mvexpand role_name
| rex field=role_name "(\w+-(?P<App_Name>[^\"]+))"
| search Environment=* type=* request.path=* App_Name IN (*)
| stats count

But if I comment out the App_Name IN (*) condition in the search clause, it produces 4547 events:

index=msad_hcv NOT ("forwarded")
| spath output=role_name path=auth.metadata.role_name
| mvexpand role_name
| rex field=role_name "(\w+-(?P<App_Name>[^\"]+))"
| search Environment=* type=* request.path=* ```App_Name IN (*)```
| stats count

My question is: how can I keep the log events from getting dropped when App_Name IN (*) is in force? Please note that the events being dropped are the ones that don't have "App_Name" in their events.
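A hedged sketch of one common way to keep those events: give events where the rex did not match a placeholder App_Name before the filter, so App_Name IN (*) matches them too (the "unknown" value is hypothetical):

index=msad_hcv NOT ("forwarded")
| spath output=role_name path=auth.metadata.role_name
| mvexpand role_name
| rex field=role_name "(\w+-(?P<App_Name>[^\"]+))"
| fillnull value="unknown" App_Name
| search Environment=* type=* request.path=* App_Name IN (*)
| stats count

This is functionally equivalent to the earlier reply's suggestion of simply omitting the clause, but it keeps App_Name populated on every event for later grouping.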
I would like to add a button in each row of a table. Once I click on the button, it should create an incident in ServiceNow. How can we achieve this using JavaScript, HTML, CSS and XML?

Thanks in advance
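A minimal sketch of the standard pattern for this in a classic dashboard: a custom table cell renderer in a JS extension, which draws a button in a dedicated column and fires an HTTP call on click. The table id ('myTable'), the column name ('Action') and the ServiceNow URL/credentials are all hypothetical, and in practice the browser's cross-origin rules usually mean the ServiceNow call has to be proxied (for example through the Splunk Add-on for ServiceNow or a custom REST endpoint) rather than made directly:

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {

    // Render a button only in the (hypothetical) "Action" column
    var ButtonCellRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            return cell.field === 'Action';
        },
        render: function($td, cell) {
            var $btn = $('<button class="btn btn-primary">Create incident</button>');
            $btn.on('click', function() {
                // Hypothetical call to ServiceNow's Table API for incidents
                $.ajax({
                    url: 'https://your-instance.service-now.com/api/now/table/incident',
                    type: 'POST',
                    contentType: 'application/json',
                    headers: { 'Authorization': 'Basic ' + btoa('user:password') },
                    data: JSON.stringify({ short_description: 'Created from Splunk table row' })
                });
            });
            $td.append($btn);
        }
    });

    var table = mvc.Components.get('myTable');
    table.getVisualization(function(tableView) {
        tableView.addCellRenderer(new ButtonCellRenderer());
        tableView.render();
    });
});

The JS file goes in the app's appserver/static directory and is referenced from the dashboard's root element, e.g. <dashboard script="table_button.js">.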
Hi, In my project we are using Synthetic monitoring, but as more and more applications are moving to MFA (TOTP) via MS Authenticator, we need to be able to capture these TOTP codes to keep synthetic monitoring working. Can someone help/guide us on how we can handle MFA in synthetic monitoring?
Finally! I just got the license.
Hello Splunk Community,

I'm encountering an issue with ingesting data from a Prometheus remote_write agent into Splunk Enterprise. This solution utilises the 'Prometheus Metrics for Splunk' app and is within a test environment.

Problem Summary: Despite ensuring that the inputs.conf file matches the configuration specifications defined in the inputs.conf.spec file, the Prometheus data is not being ingested and I am receiving errors when viewing inputs.conf in the Config Explorer app, e.g. for port: 'Not found in "btool" output (btool does not list the stanza [prometheusrw])'.

Details:
Splunk Version: Splunk Enterprise 9.2 (Trial License)
Operating System: Ubuntu 22.04
Splunk Application: Prometheus Metrics for Splunk (Latest Version 1.0.1)

inputs.conf.spec: /opt/splunk/etc/apps/modinput_prometheus/README/inputs.conf.spec
(Full inputs.conf.spec: https://github.com/lukemonahan/splunk_modinput_prometheus/blob/master/modinput_prometheus/README/inputs.conf.spec)

As seen in the screenshot (not included here), the inputs.conf.spec file defines port and maxClients configuration parameters. I updated /opt/splunk/etc/apps/modinput_prometheus/local/inputs.conf to include those settings, formatted as the spec requires (screenshot not included). The inputs.conf file was saved and the Splunk server rebooted.

After rebooting, inputs.conf was checked with the Config Explorer app to confirm the configuration specifications were being accepted. The 'Not found in "btool" output' errors were received for some parameters, including port (screenshot not included). However, other configuration parameters such as index, sourcetype and whitelist returned 'Found in "btool" output. Exists in spec file (Stanza=[prometheusrw])' and were accepted by Splunk.

For some unknown reason, Splunk is not recognising some of the configuration parameters listed within the inputs.conf.spec file, even when they are formatted accordingly.

Other Information:
Prometheus remote-write exporter details: (screenshot not included)
Splunk Index: skyline_prometheus_metrics

Any assistance is appreciated. Thank you, Splunk Community.
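A hedged sketch of a local inputs.conf for this app, built only from the parameter names mentioned in this post (port, maxClients, index, sourcetype, whitelist); all values are illustrative, and which parameters belong in a global [prometheusrw] stanza versus a named [prometheusrw://<name>] input stanza should be checked against the linked spec — a bare-versus-named stanza mismatch would be consistent with btool accepting index/sourcetype/whitelist while rejecting port:

[prometheusrw]
port = 8098
maxClients = 10
disabled = 0

[prometheusrw://testing]
index = skyline_prometheus_metrics
sourcetype = prometheus:metric
whitelist = *
disabled = 0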