All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, yes, I was able to find a good way to do it. I wanted to write a solution post for this topic but never had the chance; I'll do it with all the steps and config. To summarize, the approach I found is:

1. In Azure AKS, under Diagnostic settings (if I remember correctly), you can choose to spool the logs you need into a Storage Account or a streaming service. If you don't need real time, go with a Storage Account, which is cheaper.
2. You then read from that Storage Account with the Microsoft TA every 5 minutes.
3. Set up a lifecycle policy to delete data older than 7 days from your Storage Account. The retention period can be adjusted to your preference, but here the Storage Account acts mostly as a buffer; this way the cost stays under control. Also, regarding REST API billing, I honestly didn't see much of a difference.
4. The Microsoft TA modular input seems to have a bug: scheduled every 5 minutes, it stopped working after several hours. As a workaround, I downloaded an app with an SPL command that lets you reload a given endpoint. I embedded it in a scheduled search that runs every 5 minutes and kept the modular input at every hour, so it is the scheduled report that triggers the data download. The schedule interval needs to be longer than the time it takes to download your data from the Storage Account and parse it.
5. Once you download the data, you have to parse it and remove the unwanted parts. Unfortunately it is JSON nested inside another JSON, and you need the inner one. I did this for AKS audit logs, but it can probably be adapted easily to other log types.

As soon as I have some time I will provide the config as well. Best Regards, Edoardo
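For step 3, the retention policy might look something like the following (a sketch, assuming the Azure CLI is available; the storage account and resource group names are placeholders, and the lifecycle rule deletes block blobs 7 days after their last modification):

```shell
# Hypothetical example: a lifecycle management policy that deletes blobs
# older than 7 days. Account and resource group names are placeholders.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-after-7-days",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 7 } }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}
EOF

az storage account management-policy create \
    --account-name myaksaudit \
    --resource-group my-rg \
    --policy @policy.json
```

The same rule can also be created from the portal under the storage account's Lifecycle management blade.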
This would be helpful, but where do I place this in the query below? index=$ss_name$_$nyName_tok$_ sourcetype=plt (Instrument="ZT2" OR Instrument="XY2" OR Instrument="P4") | rex field=Instrument "(Calculated)\.(?<tag>.+)$$" | timechart span=$span$ max(ValueEng) by tag
There are charting options you could try, but with long legends this still may not be enough:

charting.legend.labelStyle.overflowMode (ellipsisEnd | ellipsisMiddle | ellipsisNone | ellipsisStart) — default ellipsisMiddle. Determines how to display labels that overflow layout bounds by replacing elided text with an ellipsis (...).
- ellipsisStart: Elides text at the start.
- ellipsisMiddle: Elides text in the middle of the line.
- ellipsisEnd: Elides text at the layout boundary.
- ellipsisNone: Disables text truncation entirely.
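Note that this is a dashboard option rather than something that goes in the SPL itself. In a classic (Simple XML) dashboard it belongs on the chart element, something like the following (the query here is a placeholder; substitute your own):

```xml
<chart>
  <search>
    <query>index=my_index sourcetype=plt | timechart span=1h max(ValueEng) by tag</query>
  </search>
  <!-- disable legend label truncation -->
  <option name="charting.legend.labelStyle.overflowMode">ellipsisNone</option>
</chart>
```

With ellipsisNone set, labels are not elided, but labels longer than the available legend width may still be clipped by the layout itself.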
<row rejects="$hide_panel$">
  <panel>
    <table>
      <search>
        <done>
          <condition match="'job.resultCount' == 0">
            <set token="hide_panel">true</set>
          </condition>
          <condition>
            <unset token="hide_panel"></unset>
          </condition>
        </done>
        <query>| makeresults | timechart count span=1d partial=f</query>
        <earliest>0</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
Hi @ITWhisperer, thanks for your reply. I cannot shorten this, as I need the full length of the legend. I have also tried moving it to the top or bottom, but I get the same output. I actually want to set the ellipsis mode to none, but I am not sure where to place this in the search query. Thanks
The same solution should work:

<row rejects="$hide_panel$">
  <panel>
    <table>
      <search>
        <done>
          <condition match="'job.resultCount' == 0">
            <set token="hide_panel">true</set>
          </condition>
          <condition>
            <unset token="hide_panel"></unset>
          </condition>
        </done>
        <query>| makeresults | timechart count span=1d partial=f</query>
        <earliest>0</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
Hi Ryan, I used the documentation to set the schedule, but that did not answer my question, as you can only set one schedule per Health Rule. How would you set multiple schedules on a Health Rule, like the example I put in my original post? Thanks, S.
Ciao Gcusello, yes, I reviewed these links; however, those steps did not work on Oracle Linux 6.x. On Google I found the steps below, which I followed and which worked on Oracle 6.x. Let me know if I am missing anything, but this worked for my system.

----steps-----
1. Create a symbolic link in init.d, pointing at the Splunk binary:
sudo ln -s /opt/splunkforwarder/bin/splunk /etc/init.d/splunk
2. Configure Splunk to start at boot using chkconfig:
sudo chkconfig splunk on
3. Verify the setting:
sudo chkconfig --list splunk
You should see output similar to:
splunk 0:off 1:off 2:on 3:on 4:on 5:on 6:off
4. Manually restart the Splunk service.
There is not much that can be done apart from placing the legend at the top or bottom of the chart. If you still get truncated names, then look to shorten them in your search.
Hi, I have a use case that involves copying historical data (6 months old) from a 3-indexer cluster to another machine. I have considered two potential solutions:

1. Stop one of the indexers and copy the index from the source to the destination. The only drawback is that the data might be incomplete, but this is not a concern since the data is for testing purposes. Given the volume of data, this process could take a significant amount of time.
2. Create a new index using the collect command with the required set of data, then copy that index to the other machine. I believe this would be the best way to implement this use case. However, the data volume is quite large, so the SPL might take a considerable amount of time to execute on the search head, potentially affecting SH performance. I am also unsure whether the collect command can handle such a large search and populate the index. Is there a limitation on the size of the data when using the collect command?

Please advise on how best to handle this scenario. Regards, Pravin
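For option 2, the collect approach might look something like the following sketch (index names are placeholders; in practice a 6-month backfill would usually be broken into smaller time windows, e.g. one scheduled run per day of historical data, rather than a single search):

```
index=source_index earliest=-6mon@mon latest=now
| collect index=copy_target
```

The target index has to exist before collect writes to it, and the copied events land as stash-sourcetype summary data unless you override the sourcetype.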
I am looking for the hide solution with timechart instead of stats: | timechart count span=1d partial=false
If you don't want them dropped, don't include App_Name IN (*), simples! 
How can I prevent truncation of the legend on a classic Splunk dashboard? The output has an ellipsis in the middle of my legend, but I want to show the full text. See my query below: index=$ss_name$_$nyName_tok$_ sourcetype=plt (Instrument="ZT2" OR Instrument="XY2" OR Instrument="P4") | rex field=Instrument "(Calculated)\.(?<tag>.+)$$" | timechart span=$span$ max(ValueEng) by tag Thanks
I am looking for this logic in a timechart. Will this logic work for timechart count? | timechart count span=1d partial=false
Ok, to simplify it as much as possible: there are two indexes:

INDEXA: FieldA, FieldB
INDEXB: FieldA, FieldC

To create a relation between the indexes I need to modify INDEXB.FieldA:
eval FieldA1 = mvindex(split(FieldA, ","), 0)
and now I want to group by FieldA/FieldA1 and FieldB, and count FieldC.
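One way to sketch this in SPL (index and field names taken from the description above; the choice of stats functions is an assumption about what "group by … and count FieldC" should mean):

```
(index=INDEXA) OR (index=INDEXB)
| eval JoinKey=if(index=="INDEXB", mvindex(split(FieldA, ","), 0), FieldA)
| stats values(FieldB) as FieldB count(FieldC) as FieldC_count by JoinKey
```

Searching both indexes together and normalizing the key with eval, then aggregating with stats, is usually preferred in Splunk over the join command, which has result-size limits.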
It's completely different logic than in relational databases; it is taking me time to switch to this "new" one. Ok, one more condition I noticed: the two indexes are linked by FieldA. The point is that FieldA in IndexB needs to be converted: | eval ModA = mvindex(split(FieldA, ","), 0) So the one-to-many relation is IndexA.FieldA = IndexB.ModA. Is it clear what I am writing about?
Hi Akermaier, were you able to configure SAML assertion decryption in Splunk Cloud? We are trying to configure the same and got the publicKey.crt file from Splunk support, but we are facing an error while configuring with the IdP team. The error is: Verification of SAML assertion using the IDP's certificate provided failed. Error: start node xmlSecNodeSignature not found in document Have you seen this issue when configuring SAML assertion decryption, and if so, how was it resolved? Thanks
Hi Splunkers, I am working on creating custom alerts using JavaScript in Splunk. I have created the SPL for the alert and configured the definition in macros.conf as follows: index=XXXX sourcetype=YYYY earliest=-30d latest=now | eval ds_file_path=file_path."\\".file_name | stats avg(ms_per_block) as avg_processing_time by machine ds_file_path | eval avg_processing_time = round(avg_processing_time, 1) ." ms" From the JS side, I set the alert condition: | search avg_processing_time >= 2 When I run the SPL in Splunk, it returns about 5 to 6 rows of results: machine       ds_file_path                                                          avg_processing_time Machine01  C:\Program Files\Splunk\etc\db\gdb.ds      5.3 ms Machine02  C:\Program Files\Splunk\etc\db\rwo.ds      7.6 ms Machine03  C:\Program Files\Splunk\etc\db\gdb.ds      7.5 ms Machine04  C:\Program Files\Splunk\etc\db\rwo.ds      8.3 ms   However, when the alert email is sent, it includes only one row of the result, like this: machine       ds_file_path                                                          avg_processing_time Machine01  C:\Program Files\Splunk\etc\db\gdb.ds      5.3 ms   I need the email to include all the qualifying results (avg_processing_time >= 2). How can I achieve this? Any guidance would be appreciated.
Here are the alert parameters I used: { 'name': 'machine_latency_' + index, 'action.email.subject.alert': 'Machine latency for the ' + index + ' environment has reached an average processing time ' + ms_threshold + ' milliseconds...', 'action.email.sendresults': 1, 'action.email.message.alert': 'In the ' + index + ' environment, the machine $result.machine$ has reached the average processing time of $result.avg_processing_time_per_block$ .', 'action.email.to': email, 'action.logevent.param.event': '{"session_id": $result.machine$, "user": $result.avg_processing_time_per_block$}', 'action.logevent.param.index': index, 'alert.digest_mode': 0, 'alert.suppress': 1, 'alert.suppress.fields': 'session_id', 'alert.suppress.period': '24h', 'alert_comparator': 'greater than', 'alert_threshold': 0, 'alert_type': 'number of events', 'cron_schedule': cron_expression, 'dispatch.earliest_time': '-30m', 'dispatch.latest_time': 'now', 'description': 'XXXXYYYYYZZZZ', 'search': '|`sw_latency_above_threshold(the_index=' + index + ')`' } I suspect that the email action is not configured to include the entire result set. Could someone help me with how to modify the alert settings or the SPL to ensure that all results are included in the alert email? Thanks in advance for your help!
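One thing worth noting: $result.fieldname$ tokens in the email message expand only from the first result row, and with 'alert.digest_mode': 0 the alert is triggered per result. To get every row into one email, a digest-mode configuration along these lines might work (a sketch; the parameter names come from the standard email alert action, so verify them against your Splunk version):

```javascript
// Hypothetical adjustment (sketch): switch to digest mode and embed the
// full result set in the message body instead of per-row $result.*$ tokens.
const alertParams = {
  'alert.digest_mode': 1,          // one alert (and one email) covering all rows
  'action.email.sendresults': 1,   // include the search results in the email
  'action.email.inline': 1,        // render the results in the message body
  'action.email.format': 'table'   // as a table rather than raw/CSV
};
console.log(JSON.stringify(alertParams));
```

Keep in mind that alert.suppress with alert.suppress.fields also throttles per field value, which can further reduce what gets sent in a 24h window.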
Hello everyone, I recently built a Splunk query (shared below) and noticed that applying the search condition App_Name IN (*) actually drops the number of events scanned. For example, the following query returns 2515 events (not to be confused with statistics)...  index=msad_hcv NOT ("forwarded") | spath output=role_name path=auth.metadata.role_name | mvexpand role_name | rex field=role_name "(\w+-(?P<App_Name>[^\"]+))" | search Environment=* type=* request.path=* App_Name IN (*) | stats count But if I comment out the App_Name IN (*) condition in line no. 4, it produces 4547 events.   index=msad_hcv NOT ("forwarded") | spath output=role_name path=auth.metadata.role_name | mvexpand role_name | rex field=role_name "(\w+-(?P<App_Name>[^\"]+))" | search Environment=* type=* request.path=* ```App_Name IN (*)``` | stats count My question is: how can I keep the log events from being dropped when App_Name IN (*) is in force? Please note that the events being dropped are the ones that don't have "App_Name" in their events.
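One way to keep events that lack the field while still filtering on it is to give those events a placeholder value first, so the wildcard matches everything. A sketch (the "N/A" value is an arbitrary choice):

```
index=msad_hcv NOT ("forwarded")
| spath output=role_name path=auth.metadata.role_name
| mvexpand role_name
| rex field=role_name "(\w+-(?P<App_Name>[^\"]+))"
| fillnull value="N/A" App_Name
| search Environment=* type=* request.path=* App_Name IN (*)
| stats count
```

After fillnull, every event has an App_Name, so App_Name IN (*) no longer discards the events where the rex did not extract one.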
I would like to add a button in each row of a table. When I click the button, it should create an incident in ServiceNow. How can we achieve this using JavaScript, HTML, CSS, and XML?    Thanks in advance