All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thanks Rich! Is it a bad practice to use a KVStore for automatic lookups since they can get very large?
TC Execution Summary for Last Quarter (No. of job runs)

Month    AUS  JER  IND  ASI
August   150  121  110  200
Sept     200  140  150  220
Oct      100  160  130  420

I want to write a query for the above table.
Thanks, I got the expected output.
I'm not sure there are best practices around automatic lookups.  There are some for lookups in general, however.  Monitor lookup size (in bytes) to make sure they don't cause the knowledge bundle to become too large (2GB).  Large lookups should be blocked from the bundle or converted to KVStore.
In case the fields may be in a different order, use multiple rex commands to extract them.

| rex "approved=(?<approved>[^,]+)"
| rex "from=(?<from>[^,]+)"
| rex "until =(?<until>[^,]+)"

I hope you see the pattern.
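The same order-independent idea can be illustrated in plain Python regex: one pattern per field, so each field is found wherever it appears. The field names mirror the rex commands above; the sample string is made up to match the format in the question.

```python
import re

# Hypothetical sample event text, shaped like the one in the question.
sample = "Details: approved=xyz, from=11/17/2023 06:22 AM , until =11/18/2023 12:00 AM"

# One pattern per field, so the order of fields in the event does not matter.
patterns = {
    "approved": r"approved=(?P<approved>[^,]+)",
    "from": r"from=(?P<from>[^,]+)",
    "until": r"until =(?P<until>[^,]+)",
}

fields = {}
for name, pattern in patterns.items():
    m = re.search(pattern, sample)
    if m:
        fields[name] = m.group(name).strip()

# fields -> {"approved": "xyz", "from": "11/17/2023 06:22 AM", "until": "11/18/2023 12:00 AM"}
```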
JP, I already have a connection to the other app in another part of my Python that you aren't seeing - this is a *new feature* on an app that I had previously built. I guess the real question is: is there a way to 1) call Splunk's built-in PDF generation with a SID from an alert action, 2) run a report based on info from an alert action, or 3) some other method I'm just not thinking of? I do have a new working version that uses fpdf to create a PDF based on the XML output of the jobs/{SID}/results API call, so if there is no other way I may just have to bite the bullet on that.
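For what it's worth, a minimal sketch of pulling a finished job's results over REST, based on the jobs/{SID}/results endpoint mentioned above. The host, port, and token are placeholder assumptions, not values from the original post:

```python
# Sketch only: the base URL, SID, and token below are hypothetical placeholders.
def build_results_url(base_url, sid, output_mode="json"):
    """Build the REST URL for a finished search job's results."""
    return f"{base_url}/services/search/jobs/{sid}/results?output_mode={output_mode}"

# Example usage with the Python stdlib (not executed here):
# import urllib.request
# req = urllib.request.Request(
#     build_results_url("https://splunk.example.com:8089", "my_sid"),
#     headers={"Authorization": "Bearer <token>"},
# )
# body = urllib.request.urlopen(req).read()
```

Fetching JSON instead of XML may simplify feeding the results into fpdf.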
Splunk Cloud fully supports SEDCMD.
At some unknown point in the future, Splunk will stop supporting non-compliant Python code.  When that happens, your scripts will fail.
Hi @gwen, let me understand: what are $server_impacted$ and $tentative_number$? Are they tokens to pass in a drilldown, or something else? Ciao. Giuseppe
Hi @PickleRick, the problem is that if I clone the event assigning the new sourcetype, I'm back at the previous impasse: if I remove the extra contents I cannot assign the correct host and source. I'll try! Thank you. Ciao. Giuseppe
Hi @richgalloway, please help me extract the fields from the Details value, i.e. approved=xyz, from=11/17/2023 06:22 AM, until =11/18/2023 12:00 AM. It should not be event-specific! Details: Approved xyz from 11/17/2023 06:22 AM until 11/18/2023 12:00 AM. Thanks
The approach may differ, but there are typically two approaches:
1) You push the whole preconfigured app (for example, with already-enabled inputs). The upside is that you can, if needed, selectively upgrade it across serverclasses and more easily keep track of versions. The downside is that you need to store each copy of the "main" app and separately apply the needed config changes to each "instance".
2) You distribute the base app separately, and separately distribute app(s) containing default and custom settings. It's easier to maintain specific settings for small serverclasses using layering, but if you need to prepare separate configs for separate main-app versions, it gets bloated.
I'm more of a fan of the second approach: split your config into small pieces, isolate them into separate apps, and push them selectively where needed. And it has nothing to do with Cloud or on-prem; it's a general principle of maintaining pushed apps.
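As a hedged sketch of the second approach, serverclass.conf on the deployment server might pair a base app with a small overlay app of site-specific settings. All class and app names here are hypothetical:

```
# serverclass.conf on the deployment server (class and app names are made up)
[serverClass:linux_hosts]
whitelist.0 = linux-*

# Base app, distributed unmodified
[serverClass:linux_hosts:app:ta_nix_base]
restartSplunkd = true

# Small overlay app that only enables/overrides specific inputs
[serverClass:linux_hosts:app:ta_nix_site_settings]
restartSplunkd = true
```

Upgrading the base app then never touches the site-specific settings, and small serverclasses each get their own lightweight overlay.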
Hi, looks nice, thanks.
hello,

index=windows_srv EventCode=20005
| stats count by host
| search count >= 1
| eval server_impacted = host, tentative_number = count
| table server_impacted, tentative_number

and I'm using $server_impacted$ and $tentative_number$ in my correlation search. Then I see in the title on my Incident Review: "my message on $server_impacted$" instead of "my message on windowsservername".
For one of our SQL servers running UF version 9.1.1, I can see a lot of errors with EventCode=4506 and the message below. When I check the application logs, every 60 minutes I can see around 744,252 events with this error, so kindly let me know how I can get them fixed.

11/17/2023 06:15:23 AM
LogName=Application
EventCode=4506
EventType=2
ComputerName=abc.def.xyz
SourceName=HealthService
Type=Error
RecordNumber=xxxxxxxxxx
Keywords=Classic
TaskCategory=None
OpCode=None
Message=Splunk could not get the description for this event. Either the component that raises this event is not installed on your local computer or the installation is corrupt. FormatMessage error: Got the following information from this event: AB-Prod Microsoft.SQLServer.Windows.CollectionRule.DatabaseReplica.FileBytesReceivedPerSecond abc\ABC_PROD.WSS_Content_internal_portal_xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}

Kindly help with the same.
Sure, it is. But it's formally a different command
CLONE_SOURCETYPE makes a clone of the event you have, sets the sourcetype that you provide for it, and pushes it back into the front of the processing pipeline. I'm not 100% sure (you'd have to test it), but I'd assume that if you overwrote source and host before arriving at the transform cloning the event, you'd have your new host and source applied.
* The duplicated events receive index-time transformations & sed commands for all transforms that match their new host, source, or sourcetype.
* This means that props.conf matching on host or source will incorrectly be applied a second time.
So yep, something like your props.conf, but:
1. The set-sourcetype transform would have to use CLONE_SOURCETYPE to recast the sourcetype to your linux_audit.
2. You'd have to make sure that your transforms are applied in the proper order (first adjust the metadata, then clone the sourcetype, finally drop to the nullQueue).
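As a rough sketch of that ordering, something along these lines. The stanza names, the regex, and the original sourcetype are hypothetical, and the left-to-right ordering of the transforms list is what enforces steps 1 and 2; test before relying on it:

```
# props.conf - transforms in one TRANSFORMS- list run left to right
[original_sourcetype]
TRANSFORMS-route = set_host_source, clone_to_audit, drop_original

# transforms.conf
[set_host_source]
REGEX = host=(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1

[clone_to_audit]
REGEX = .
CLONE_SOURCETYPE = linux_audit

[drop_original]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```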
After installation and configuration of the machine agent on a local machine to collect metrics, the metrics are not populating properly and the data displayed is incomplete. We are not able to see CPU percentage, memory percentage, etc. Please suggest how to pull complete metrics into AppDynamics. Are any configuration file changes needed, or any config changes in the AppDynamics UI?
Hi @pvarelab, sorry, but your question isn't so clear to me: you have an on-premise DS that you use to deploy apps to your on-premise forwarders. First, I suggest putting two Heavy Forwarders as concentrators, to avoid opening an internet connection from all systems to Splunk Cloud. If you don't have too many clients, you could use one of these HFs as the DS. Then in the DS you store all the apps to deploy to the clients, and you deploy them based on serverclasses. Why do you want to manage a precedence in installation? You should deploy already-configured apps. The only point of attention is to analyze your deployment requirements and design your serverclasses very carefully. Ciao. Giuseppe
I installed CyberChef on Splunk Enterprise, that's it. I was trying to test out the application on a local machine with an install of Splunk using a "free" license. This did not work at all, so I rolled out the app in our Enterprise test environment, and there it did work. In other words, I did not really do anything to "fix" it; it just worked once there was a valid Enterprise license.