All Posts


So I changed the search with your suggestion and also added another array that it's sorting by, but it's giving me the same numbers for both read and write. I am looking to show the min, max, and avg values for read, and then the same for write, and they should be different. This is my current search:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=$plugin_instance1$
| stats min(values{}) as min max(values{}) as max avg(values{}) as avg by dsnames{}
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)

This is the current output, and these are the events in JSON format.
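A likely cause: values{} and dsnames{} are parallel multivalue arrays, so aggregating one while grouping by the other does not pair the read and write values with their names. A minimal sketch of one common fix, assuming each event carries matching dsnames{} and values{} entries (the index and token names are taken from the post above):

index="collectd_test" plugin=disk type=disk_octets plugin_instance=$plugin_instance1$
| eval pair=mvzip('dsnames{}', 'values{}')
| mvexpand pair
| eval dsname=mvindex(split(pair, ","), 0), value=tonumber(mvindex(split(pair, ","), 1))
| stats min(value) as min max(value) as max avg(value) as avg by dsname
| foreach min max avg [ eval <<FIELD>>=round(<<FIELD>>, 2) ]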
@AL3Z  $SPLUNK_HOME/bin/splunk btool inputs list --debug
I did not understand the difference between the two stanzas. Can you please explain?
Which one should I move to /opt/splunkforwarder/etc/system/local, and which should I edit?

/opt/splunkforwarder/etc/system/default/props.conf
/opt/splunkforwarder/etc/apps/search/default/props.conf
/opt/splunkforwarder/etc/apps/splunk_internal_metrics/default/props.conf
/opt/splunkforwarder/etc/apps/learned/local/props.conf
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/props.conf
/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_local/apps/learned/local/props.conf
/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_default/system/default/props.conf
/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_default/apps/search/default/props.conf
/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_default/apps/splunk_internal_metrics/default/props.conf
/opt/splunkforwarder/var/run/splunk/confsnapshot/baseline_default/apps/SplunkUniversalForwarder/default/props.conf
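As a rule, files under default/ and under var/run/splunk/confsnapshot/ should never be edited: default files are replaced on upgrade, and the confsnapshot copies are migration baselines, not live configuration. Overrides belong in system/local or in an app's local directory. A minimal hypothetical sketch (the stanza name and settings are placeholders, not taken from this thread):

# /opt/splunkforwarder/etc/system/local/props.conf
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)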
The monitor stanza in inputs.conf watches for updates to the abc.sh file - something unlikely to happen often. To run a scripted input, use a script stanza:

[script://./bin/abc.sh]
interval = 500
index = xyz
sourcetype = script:abc
Thanks Rich! Is it a bad practice to use a KVStore for automatic lookups since they can get very large?
TC Execution Summary for Last Quarter (no. of job runs)

Month    AUS  JER  IND  ASI
August   150  121  110  200
Sept     200  140  150  220
Oct      100  160  130  420

I want to write a query for the above table.
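One hedged sketch of how such a table could be produced, assuming a hypothetical index tc_jobs and hypothetical fields month and region (none of these appear in the post; substitute your own):

index=tc_jobs earliest=-3mon@mon
| chart count over month by region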
Thanks, I got the expected output.
I'm not sure there are best practices around automatic lookups.  There are some for lookups in general, however.  Monitor lookup size (in bytes) to make sure they don't cause the knowledge bundle to become too large (2GB).  Large lookups should be blocked from the bundle or converted to KVStore.
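For example, a minimal sketch of blocking one large lookup from bundle replication (the lookup file name is hypothetical):

# distsearch.conf on the search head
[replicationBlacklist]
exclude_big_lookup = apps/search/lookups/big_lookup.csv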
In case the fields may be in a different order, use multiple rex commands to extract them:

| rex "approved=(?<approved>[^,]+)"
| rex "from=(?<from>[^,]+)"
| rex "until =(?<until>[^,]+)"

I hope you see the pattern.
JP, I already have a connection to the other app in another part of my Python you aren't seeing - this is a *new feature* on an app that I had previously built. I guess the real question is: is there a way to
1) call Splunk's built-in PDF generation with a SID from an alert action,
2) run a report based on info from an alert action, or
3) some other method I'm just not thinking of?
I do have a new working version that uses fpdf to create a PDF based on the XML output of the jobs/{SID}/results API call, so if there is no other way I may just have to bite the bullet on that.
Splunk Cloud fully supports SEDCMD.
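For reference, a minimal props.conf sketch of a SEDCMD (the sourcetype and pattern here are hypothetical) that masks long digit runs at index time:

# props.conf
[my_sourcetype]
SEDCMD-mask_numbers = s/\d{13,16}/xxxx/g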
At some unknown point in the future, Splunk will stop supporting non-compliant Python code.  When that happens, your scripts will fail.
Hi @gwen, let me understand: what are $server_impacted$ and $tentative_number$? Are they tokens to pass in a drilldown, or something else? Ciao. Giuseppe
Hi @PickleRick, the problem is that if I clone the event and assign the new sourcetype, I'm back in the previous impasse: if I remove the extra content, I cannot assign the correct host and source. I'll try! Thank you. Ciao. Giuseppe
Hi @richgalloway, please help me extract the fields from the Details value, i.e. approved=xyz, from=11/17/2023 06:22 AM, until =11/18/2023 12:00 AM; it should not be event-specific!

Details: Approved xyz from 11/17/2023 06:22 AM until 11/18/2023 12:00 AM.

Thanks
The approach may differ, but there are typically two:
1) You push the whole preconfigured app (for example, with inputs already enabled). The upside is that you can - if needed - selectively upgrade it across serverclasses and more easily keep track of versions. The downside is that you need to store each copy of the "main" app and separately apply the needed config changes to each "instance".
2) You distribute the base app and, separately, app(s) containing default and custom settings. It's easier to maintain specific settings for small serverclasses using layering, but if you need to prepare separate configs for separate main-app versions, it gets bloated.
I'm more of a fan of the second approach - split your config into small pieces, isolate them into separate apps, and push them selectively where needed. And it has nothing to do with Cloud or on-prem; it's a general idea of maintaining pushed apps.
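A minimal serverclass.conf sketch of the second approach (the server class and app names here are hypothetical):

# serverclass.conf on the deployment server
[serverClass:linux_uf]
whitelist.0 = linux-*

[serverClass:linux_uf:app:base_nix_inputs]
restartSplunkd = true

[serverClass:linux_uf:app:org_custom_overrides]
restartSplunkd = true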
Hi, looks nice, thanks.
Hello,

index=windows_srv EventCode=20005
| stats count by host
| search count >= 1
| eval server_impacted = host, tentative_number = count
| table server_impacted, tentative_number

I'm using $server_impacted$ and $tentative_number$ in my correlation search, but in the title on my Incident Review I see "my message on $server_impacted$" instead of "my message on windowsservername".
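If this search feeds a standard alert action rather than an ES notable, field tokens generally need the $result.$ prefix; a hedged sketch of the title string under that assumption:

my message on $result.server_impacted$

For an ES correlation search, also confirm that server_impacted actually exists in the final result rows the notable is generated from.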
For one of our SQL servers running UF version 9.1.1, I can see a lot of errors reported with EventCode=4506 with the below message. When I check the application logs, I can see around 744,252 events every 60 minutes with the error message below, so kindly let me know how I can get them fixed.

11/17/2023 06:15:23 AM
LogName=Application
EventCode=4506
EventType=2
ComputerName=abc.def.xyz
SourceName=HealthService
Type=Error
RecordNumber=xxxxxxxxxx
Keywords=Classic
TaskCategory=None
OpCode=None
Message=Splunk could not get the description for this event. Either the component that raises this event is not installed on your local computer or the installation is corrupt. FormatMessage error: Got the following information from this event: AB-Prod Microsoft.SQLServer.Windows.CollectionRule.DatabaseReplica.FileBytesReceivedPerSecond abc\ABC_PROD.WSS_Content_internal_portal_xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}

Kindly help with the same.
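The FormatMessage error means the event publisher's message resources are missing on the source host, so the real fix belongs on that server (here the publisher is SCOM's HealthService). If these 4506 events have no monitoring value, one option is to filter them at the forwarder; a minimal sketch, assuming the standard Application event log input:

# inputs.conf on the UF
[WinEventLog://Application]
blacklist1 = EventCode="4506" SourceName="HealthService"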