All Posts

I'm trying to create a search where I take a small list of IPs from sourcetype A and compare them against a larger set of IPs in sourcetype B. I will then make a table using fields from sourcetype B that do not exist in sourcetype A, to create a more detailed view of the events involving the IP. Is there a way to do this without using a lookup table?

index=paloalto (sourcetype=sourcetype_B OR sourcetype=sourcetype_A)
| eval small_tmp=case(log_type="CORRELATION", src_ip)
| eval large_tmp=case(log_type!="CORRELATION", src_ip)
| where match(small_tmp, large_tmp)
| table field A, field B, field C
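A hedged sketch of one lookup-free alternative: use a subsearch over sourcetype_A to build the IP filter, then table the extra sourcetype_B fields. The field names are placeholders carried over from the post, and src_ip is assumed to be the field shared by both sourcetypes:

index=paloalto sourcetype=sourcetype_B
    [ search index=paloalto sourcetype=sourcetype_A | fields src_ip ]
| table src_ip, field_A, field_B, field_C

The subsearch returns its src_ip values as an implicit OR filter on the outer search; with a small IP list this stays well under the default subsearch result limit.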
Hi @ririzk, _ssl.c is part of Python, not Splunk. A quick look at the _ssl.c source code (no specific version) shows that this error is returned when a connection is closed unexpectedly. You should contact Duo support for more detail.
Hi @newbie77, If an instance of Field1=Start is always the earliest event by uid and Field2=Finish is always the latest event by uid, you can use the stats range() function:

| stats range(_time) as duration by uid

Otherwise, use the stats min() and max() or earliest() and latest() functions with an eval expression:

| stats min(eval(case(Field1=="Start", _time))) as start_time max(eval(case(Field2=="Finish", _time))) as finish_time by uid
| eval duration=finish_time-start_time
Hi, Thanks for the feedback. We can have a lot of rows; I will have a look at the other app. Cheers, Rob
This is very old, but did anyone ever figure this out? We've had a ticket open for a month now about this exact issue and have chased down every error message and possible conf change. If anyone out there has a possible solution or suggestion for this, that would be awesome! Thanks, everyone!
I have three searches combined with OR, for example: "order success", "order failed", "offer success". Based on the above 3 statements I can perform the search, but I want to show the result as a pie chart on a per-hour basis.
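A pie chart has no time axis, so "per hour" usually means a timechart split by category instead. A minimal sketch, assuming the three phrases appear verbatim in the raw events (the index name is a placeholder):

index=your_index ("order success" OR "order failed" OR "offer success")
| eval status=case(searchmatch("order success"), "order success",
    searchmatch("order failed"), "order failed",
    searchmatch("offer success"), "offer success")
| timechart span=1h count by status

For a true pie chart restricted to a single hour, replace the timechart with | stats count by status and narrow the time range picker to that hour.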
I have the following setup with Indexer Discovery + Indexer Cluster + Search Head Cluster:
- Deployment Server
- 3 x Indexer + Cluster Manager (Indexer Cluster)
- Search Head Deployer + Search Head (set up as part of a SHC for possible future scaling up)

For forwarding logs from the Cluster Manager, I referred to: https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Forwardmanagerdata
For forwarding logs from Search Head Cluster nodes, I referred to: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Forwardsearchheaddata
I believe forwarding logs from the Deployment Server should be similar to the above.

For indexers belonging to an indexer cluster, I have considered the following:
1. Install a UF on each indexer to monitor and forward logs to the indexer cluster (via indexer discovery).
2. Just monitor logs locally and allow each indexer to index its own local logs (without going through the indexer cluster).
3. Configure the indexer to forward the locally monitored logs, without indexing them, to the indexer cluster. I am not sure if it is necessary to ensure that it does not index the same data twice; I am unsure how this would play out.

Option 2 seems to be the easiest to achieve, but ideally I would like all logs to go through the indexer cluster for indexing. What should be the best practice for forwarding logs from indexers that are part of the indexer cluster?
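For reference, the forwarding pattern from the linked Forwardmanagerdata page boils down to an outputs.conf like the sketch below (receiver addresses are placeholders). Note that it is documented for management components such as the Cluster Manager; whether applying it on a cluster peer itself (option 3) is safe is exactly the open question here, since peers receive forwarded data on the same pipeline:

# outputs.conf on the forwarding instance
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997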
Forgot to say, thank you everyone for the assist.
What I need is for the line that starts with Start: to be the break-after line:

Start: 14-Jun-24 07:55:05, End: 14-Jun-24 07:56:35, Mode: 5, Status: [-11059] No Good Data For Calculation",

I want to break after the ", but since there are several ", occurrences and not only that final one, how do I get it to break at that last one?
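One hedged props.conf sketch: anchor the break on the distinctive Status tail of the Start: line rather than on ", alone, so the earlier ", occurrences inside the event are ignored. The sourcetype name is a placeholder, and this assumes every event ends with a Status: [...] ... ", line like the sample:

[your_sourcetype]
SHOULD_LINEMERGE = false
# Break only at newlines that directly follow the Status tail; the
# capture group holds just the newlines, so the Status text itself
# stays with the event it closes.
LINE_BREAKER = Status: \[[^\]]+\][^\r\n]*",([\r\n]+)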
@gcusello Please can you help? How can I prevent truncation of the legend on a classic Splunk dashboard? The output has an ellipsis in the middle of my legend, but I want to show the full text in the legend. See my query below:

index=$ss_name$_$nyName_tok$_ sourcetype=plt (Instrument="ZT2" OR Instrument="XY2" OR Instrument="P4")
| rex field=Instrument "(Calculated)\.(?<tag>.+)$$"
| timechart span=$span$ max(ValueEng) by tag

Thanks
Field1=Start
Field2=Finish
Field1 and Field2 have multiple events with values Start and Finish for a given uid, respectively. I want to pick the earliest event for Field1 and the latest event for Field2 and find the duration. Field3=uid, which is the common field.

...| transaction uid startswith="Start" endswith="Finish" | stats avg(duration)

It's not giving the expected result.
Hello, Yes, I have been able to find a good way to do it. I wanted to write a solution post for this topic but never had the chance; I'll do it, providing all the steps and config. To summarize, the way I found is:

1. In Azure AKS, in diagnostic settings (if I remember well), you can decide to spool the logs you need into a Storage Account or a Streaming Service. If you don't need real time, go with a Storage Account, which is cheaper.
2. You then read from that Storage Account with the Microsoft TA every 5 minutes.
3. You set up a policy to delete data older than 7 days from your Storage Account. The retention policy can be adjusted per your preference, but here it acts mostly as a buffer. In this way the cost stays under control. Also, regarding REST API billing, I honestly didn't see much of a difference.
4. The Microsoft TA modular input seems to have a bug: scheduled every 5 minutes, it stopped working after several hours. As a workaround I downloaded an app with an SPL command that allows you to reload the endpoint you want. I embedded it into a scheduled search that runs every 5 minutes, keeping the modular input at every hour. In this way it is the scheduled report that triggers the data download. The schedule interval needs to be longer than the time it takes to download your data from the Storage Account and then parse it.
5. Once you download the data, you then have to parse it, removing the unwanted data. Unfortunately it is JSON nested inside another JSON, and you need the nested one (a sketch of this step follows below). I did this for AKS audit, but it can probably be adjusted easily for other types of logs.

As soon as I have some time I will provide the config as well. Best Regards, Edoardo
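For step 5, a hedged spath sketch for extracting the inner JSON from the outer record; the path records{}.properties.log and the index/sourcetype names are assumptions about the AKS export layout and may differ in your environment:

index=azure_aks sourcetype=azure:aks:audit
| spath path=records{}.properties.log output=inner
| spath input=inner
| fields - inner

The first spath pulls the nested JSON string out of the outer envelope; the second parses that string into fields.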
This would be helpful, but where do I place this in the below query?

index=$ss_name$_$nyName_tok$_ sourcetype=plt (Instrument="ZT2" OR Instrument="XY2" OR Instrument="P4")
| rex field=Instrument "(Calculated)\.(?<tag>.+)$$"
| timechart span=$span$ max(ValueEng) by tag
There are charting options you could try, but with long legends this still may not be enough:

charting.legend.labelStyle.overflowMode (ellipsisEnd | ellipsisMiddle | ellipsisNone | ellipsisStart)
Default: ellipsisMiddle
Determines how to display labels that overflow layout bounds by replacing elided text with an ellipsis (...).
- ellipsisStart: Elides text at the start.
- ellipsisMiddle: Elides text in the middle of the line.
- ellipsisEnd: Elides text at the layout boundary.
- ellipsisNone: Disables text truncation entirely.
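These belong in the panel's Simple XML as chart options, not in the SPL itself. A minimal sketch using the query from this thread (note that the < in the rex named group must be XML-escaped in dashboard source):

<chart>
  <search>
    <query>index=$ss_name$_$nyName_tok$_ sourcetype=plt (Instrument="ZT2" OR Instrument="XY2" OR Instrument="P4")
| rex field=Instrument "(Calculated)\.(?&lt;tag&gt;.+)$$"
| timechart span=$span$ max(ValueEng) by tag</query>
  </search>
  <option name="charting.legend.labelStyle.overflowMode">ellipsisNone</option>
</chart>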
<row rejects="$hide_panel$">
  <panel>
    <table>
      <search>
        <done>
          <condition match="'job.resultCount' == 0">
            <set token="hide_panel">true</set>
          </condition>
          <condition>
            <unset token="hide_panel"></unset>
          </condition>
        </done>
        <query>| makeresults | timechart count span=1d partial=f</query>
        <earliest>0</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
Hi @ITWhisperer, thanks for your reply. I cannot shorten this, as I need the full length of the legend. I have also tried to move it to the top or bottom, but I get the same output. I actually want to set the ellipsis to none, but I am not sure where to place this in the search query. Thanks
The same solution should work:

<row rejects="$hide_panel$">
  <panel>
    <table>
      <search>
        <done>
          <condition match="'job.resultCount' == 0">
            <set token="hide_panel">true</set>
          </condition>
          <condition>
            <unset token="hide_panel"></unset>
          </condition>
        </done>
        <query>| makeresults | timechart count span=1d partial=f</query>
        <earliest>0</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
Hi Ryan, I used the documentation to set the schedule, but that did not answer my question, as you can only set one schedule per Health Rule. How would you set multiple schedules for a Health Rule, like the example I put in my original post? Thanks, S.
Ciao Gcusello, Yes, I reviewed these links; however, those steps did not work on Oracle Linux 6.x. On Google I found the steps below, which I followed and which worked on Oracle 6.x. Let me know in case I am missing anything here, but this worked for my system.

----steps-----
1. Create a symbolic link in init.d, linking the Splunk init script into the /etc/init.d/ directory:
   sudo ln -s /opt/splunkforwarder/bin/splunk /etc/init.d/splunk
2. Configure Splunk to start at boot using chkconfig, enabling the Splunk service on boot:
   sudo chkconfig splunk on
3. Verify the setting:
   sudo chkconfig --list splunk
   You should see output similar to:
   splunk 0:off 1:off 2:on 3:on 4:on 5:on 6:off
4. Manually restart the Splunk service.
There is not much that can be done apart from placing the legend at the top or bottom of the chart. If you still get truncated names, then look to shorten them in your search.