All Topics

Hi, when we use the SEDCMD setting to mask data it is an index-time operation, and when we use transforms to mask data it is a search-time operation. Is that correct?
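For reference, a minimal sketch of both masking approaches being asked about (sourcetype, pattern, and stanza names are made up). Note that both SEDCMD and TRANSFORMS rewrite _raw at index time; search-time field extractions are configured with REPORT/EXTRACT instead.

# props.conf (applied at index time on the indexer or heavy forwarder)
[my_sourcetype]
SEDCMD-mask_acct = s/acct=\d+/acct=########/g
TRANSFORMS-mask_acct = mask_acct_transform

# transforms.conf (also index time when referenced via TRANSFORMS-)
[mask_acct_transform]
REGEX = (.*)acct=\d+(.*)
FORMAT = $1acct=########$2
DEST_KEY = _raw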
We have had Jenkins -> Splunk integrated for some time (over a year) where Jenkins console logs are forwarded to Splunk via the Splunk plugin for Jenkins, https://plugins.jenkins.io/splunk-devops/. Console logs were showing up in Splunk under two indices, index=statistics and index=jenkins_console. Unfortunately, we stopped seeing logs from Pipeline jobs in index=jenkins_console, while Pipeline jobs do still show up in index=statistics, and Freestyle jobs continue to work for both indices (data available in both). Does Splunk Inc. provide support for both https://plugins.jenkins.io/splunk-devops/ and https://splunkbase.splunk.com/app/3332/#/details? We have reproduced the issue many times over and looked into both the Jenkins config and the Splunk config; any additional suggestions we can look into?
When I configure a correlation search with an annotation of MITRE ATT&CK and create a notable, I don't see any evidence of the annotation in the notable. Does anyone have ideas on how I can search my platform to report on triggered notables by MITRE ATT&CK?
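A starting point for reporting, assuming (as is typical in recent Enterprise Security versions, but verify on your platform) that the annotations are stored as JSON in the annotations field of events in the notable index:

index=notable
| spath input=annotations path=mitre_attack{} output=mitre_technique
| stats count by search_name, mitre_technique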
At my current position, I took over for someone who didn't take care of Splunk and Enterprise Security. It looked as if it was never fully configured (they just ran through the initial setup wizard and left it). I've become familiar with making my way around Enterprise Security, but some items that were being detected aren't anymore. It's only detecting inactive accounts; it used to detect much more before I upgraded Splunk Enterprise Security. After installation, what should be configured? I installed the Security Essentials app and ran through the data inventory check, and it detected some things. How do I tell Enterprise Security to look at those indexes? I'm guessing I need to configure the CIM app? I don't know what my next steps are.
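As a sketch of one likely next step: Enterprise Security reads data through the CIM data models, and the Splunk Common Information Model add-on exposes per-datamodel macros that control which indexes each model searches. The index names below are placeholders:

# macros.conf in Splunk_SA_CIM, also editable via Settings > Advanced search > Search macros
[cim_Authentication_indexes]
definition = (index=wineventlog OR index=linux_secure)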
Hi! How can I configure the Splunk Universal Forwarder on Linux to use the FQDN - basically the result of hostname -f - as its hostname automatically, i.e. without hard-wiring the FQDN in any of Splunk's configuration files? If there is no simple configuration for this, perhaps there is a way to do it with a script that is triggered every time I start the forwarder? I have been using host = $decideOnStartup in inputs.conf, which picks up the hostname of the machine; however, on many distros that hostname is just the first part of the FQDN. Thank you!
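There is no built-in setting I know of that expands to the FQDN, so one possible sketch is a wrapper script that rewrites the host setting before each start. The install path and conf placement are assumptions; it also assumes system/local/inputs.conf carries only this override:

#!/bin/sh
# Hypothetical start wrapper: write the FQDN into the UF's host setting, then start Splunk.
FQDN=$(hostname -f)
printf '[default]\nhost = %s\n' "$FQDN" > /opt/splunkforwarder/etc/system/local/inputs.conf
/opt/splunkforwarder/bin/splunk start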
Hi there, I have found an issue when using the Send to Mobile action on an alert. If the trigger condition is set to "less than 1" or "equals 0", the alert does not create a push notification, i.e. if an event with sourcetype=globalscape is not generated by 9am, the Send to Mobile action should fire, but the push notification is not received. The only workaround I have found is to add a Log Event action, create a second alert on the logged event, and put the Send to Mobile action on that second alert. I think this is a bug, but any help is appreciated.
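One workaround sketch for the zero-result case: have the search itself emit a row when nothing arrived, so the alert can use the ordinary "number of results > 0" trigger (sourcetype taken from the question; the time window is an example):

sourcetype=globalscape earliest=-24h@h
| stats count
| where count = 0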
The latest Chrome Stable release, v96, exposed an issue with Single Value and Single Value Radial visualizations, where icons are not rendered properly. The icons display as expected in Table visualizations, as shown in the attached screenshot (SingleValueVisualizationIssue).
Hi, does Splunk Cloud have any DR targets for RPO or RTO in the standard agreement? I had a look but could not see anything; however, in case it does, any help would be appreciated.
When running the following search over a 24-hour period, it is always auto-finalized due to the disk usage limit of 100MB.

index="app_ABC123" source="/var/abc/appgroup123/logs/app123/stat.log"
| stats count as TotalEvents by TxId
| sort TotalEvents desc
| where TotalEvents > 100

Is there any way for me to optimize the search so that it doesn't hit the limit?
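One optimization sketch: apply the where filter before sorting, so the job never has to spool the full sorted result set to disk, and use sort 0 if all qualifying rows are needed:

index="app_ABC123" source="/var/abc/appgroup123/logs/app123/stat.log"
| stats count as TotalEvents by TxId
| where TotalEvents > 100
| sort 0 - TotalEvents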
Hi all, I'm trying to extract two fields from _raw but it seems to be a bit of a struggle. I want to extract ERRTEXT and MSGXML. I tried Splunk's field extraction option, and below are the rex expressions I got. The issue with the rex for ERRTEXT is that it pulls in all the MSGXML content as well. A regex that extracts what follows the ERRTEXT and MSGXML labels would be great.

| rex field=_raw "^(?:[^=\n]*=){7}(?P<ERRTEXT>.+)"
| rex field=_raw "^(?:[^=\n]*=){8}(?P<MSGXML>.+)"

Sample of the data ingested into Splunk (pushed over by Splunk DB Connect):

2021-12-09 09:56:00.998, FACILITY_DETAILS="/v1/facilities/XXXX/arrears", FACILITY_ID="101010/", TIMESTAMP="2021-12-09 03:41:06.768342", CORRELATION="414d51204d425032514d30322020xxxda4b", ORIGIN="FROMORIGIIN", ERRCODE="code":"400",", ERRTEXT="detail":"must be greater than the previously recorded value of 105 days","source":{"pointer":"/data/days_past_due"}}]}", MSGXML="{"errors":[{"id":"3a59de59-8b99-4e4a-abfb-XXXXXX","status":"400","code":"400","title":"days_past_due is invalid","detail":"must be greater than the previously recorded value of 105 days","source":{"pointer":"/data/days_past_due"}}]}"
2021-12-09 09:56:00.998, FACILITY_DETAILS="/v1/facilities/XXXX/arrears", FACILITY_ID="101010/", TIMESTAMP="2021-12-09 03:41:06.768342", CORRELATION="414d51204d425032514d30322020xxxda4b", ORIGIN="FROMORIGIIN", ERRCODE="code":"400",", ERRTEXT="detail":"must be greater than the previously recorded value of 105 days","source":{"pointer":"/data/days_past_due"}}]}", MSGXML="{"errors":[{"id":"3a59de59-8b99-4e4a-abfb-XXXXXX","status":"400","code":"400","title":"days_past_due is invalid","detail":"must be greater than the previously recorded value of 105 days","source":{"pointer":"/data/days_past_due"}}]}"
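One possible approach that anchors on the literal field names instead of counting = signs (only checked against the sample above, so treat it as a starting point):

| rex field=_raw "ERRTEXT=\"(?<ERRTEXT>.*?)\",\s*MSGXML="
| rex field=_raw "MSGXML=\"(?<MSGXML>.*)\"\s*$"

The non-greedy .*? stops at the first ", that is immediately followed by MSGXML=, which keeps the MSGXML content out of ERRTEXT.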
I am trying to consume the AppDynamics API from a .NET console application using RestSharp. Please help me with a better way to use it, as I am getting errors with RestSharp and I am unable to map the AppDynamics request format onto RestSharp. Many thanks in advance. Regards, Sunitha
I have data coming in through a forwarder which includes SERVER_NAME among other details, and I have a lookup created from a CSV file which holds SERVER_NAME, OWNER, and REGION. My current dashboard has a filter on SERVER_NAME from the forwarder data, and now I need to create dashboard filters for OWNER and REGION, which come from the lookup and not from the forwarder data. I created the filters for OWNER and REGION and created tokens for them as "$owner_t$" and "$region_t$", which I am using in the dashboard search as

| index = XXX OWNER="$owner_t$" and REGION="$region_t$"

When I select these tokens, the data on the dashboard is not filtered and shows "No results found". Can someone guide me on where I am going wrong?
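Since OWNER and REGION exist only in the lookup, they have to be added to the events before they can be filtered on. A sketch, assuming the lookup definition is named server_owners (hypothetical name; substitute your own):

index=XXX
| lookup server_owners SERVER_NAME OUTPUT OWNER REGION
| search OWNER="$owner_t$" REGION="$region_t$"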
Hi, I'm using the Splunk Add-on for Salesforce in Splunk Cloud and checking for any errors raised by the add-on using this query:

index=_internal sourcetype=sfdc:object:log "stanza_name=xxx"

For every input run, this error about cce_plugin_sfdc is generated. Is anyone having similar issues?

file=plugin.py, func_name=import_plugin_file, code_line_no=63 | [stanza_name=xxx] Module cce_plugin_sfdc aleady exists and it won't be reload, please rename your plugin module if it is required.
Hi, I'm trying to get wildcard lookups to work using the lookup command. I've followed the guidance to set up the Match Type for the field in the lookup definition as per "Define a CSV lookup in Splunk Web" in the Splunk documentation (I don't have access to transforms.conf), and whatever I try, adding WILDCARD(foo) makes no difference, as if the feature is not being applied. I've found several posts where people report success, but I cannot replicate it myself.

Lookup example:

foo    bar
abc    1
*cba*  2

| makeresults | eval foo="x" | lookup mylookup foo

x="abc" matches
x="*cba*" matches
x="ab*" does not match
x="dcba" does not match

I'd rather not resort to inputlookup subsearches if possible as my applications are quite complex! Splunk Version: 8.2.2.1. Many thanks in advance.
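For reference, a sketch of the transforms.conf stanza that the Match Type setting in Splunk Web should produce behind the scenes, which an admin could verify with btool (stanza name assumed to match the lookup definition):

[mylookup]
filename = mylookup.csv
match_type = WILDCARD(foo)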
Hi all, I'm having an issue trying to route events to an index by source; posting as a new question as I've not found anything that helps me understand how/where to configure this. We have events being streamed to a HEC token hosted on a HF, which then forwards the events to an indexer; all events are ending up in the main index on the indexer. How can events whose default source field is 'xyz' be sent to a specific index 'index_xyz'? I've seen numerous posts about routing to a specific index using the sourcetype, but not the source. I know props.conf and transforms.conf are needed, but I've not seen any examples using source, and I'm also unsure whether they should be implemented on the HF or the indexer. The reasoning for routing by source is that these events always arrive listed with the token name 'xyz'. TIA, Daniel
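A sketch of the standard index-routing pattern keyed on source, placed on the HF since it is the first full Splunk Enterprise instance to parse these events (source and index names taken from the question):

# props.conf
[source::xyz]
TRANSFORMS-route_to_xyz = route_source_xyz

# transforms.conf
[route_source_xyz]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = index_xyz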
Hi! We're looking into deploying Splunk in Azure, and I wonder if anyone has good suggestions for long-term (3 years) cold bucket storage in Azure. We don't need frozen storage. We want to use Premium SSD for hot/warm, but managed disks for cold storage become really expensive. Can we use, for instance, Azure Files, Blob storage, or Data Lake for this purpose? SmartStore in AWS or GCP is not an option for us. Thanks!
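For context on what the cold tier has to support: cold buckets are just a per-index filesystem path, so whatever Azure storage is chosen must be mountable as a regular filesystem. A sketch with placeholder paths:

# indexes.conf
[myindex]
# hot/warm on Premium SSD
homePath = $SPLUNK_DB/myindex/db
# cold on the cheaper storage tier, e.g. a mounted share
coldPath = /mnt/coldstore/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb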
I'm running Splunk Enterprise 8.0.5 on Windows 2016 and looking to upgrade to 8.2.3. We run the following:

2 indexers
1 search head
1 master node (cluster master, deployment server, and license master)

We currently back up only the index files, which is very risky, so I need to get the configuration backed up as well. From reading the documents it seems that generally we only need to back up $SPLUNK_HOME/etc/. Is there any requirement to back up /var/ or any other folders, though?
My tabular output contains columns/fields like:

account_number | colour | team_name | business_unit

I am getting this output by aggregating with stats BY account_number. Some of the events with the same account_number have null colour, team_name, and business_unit values, so I used

| streamstats last(colour) as colour, last(team_name) as team_name, last(business_unit) as business_unit

to populate them from the previous row's values. I want streamstats to populate the empty fields with the previous row's value ONLY IF the previous row's account_number is the same as the current row's. The issue I am getting now is, say I have three rows with account_number 0001, and the 4th row has account_number 0002 with the other three fields (colour, team_name, and business_unit) empty; they get populated with the previous 0001 row's values, which is incorrect.
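A sketch of one fix: add a by clause so streamstats computes the fill-down per account_number and values cannot leak across account boundaries:

| streamstats last(colour) as colour, last(team_name) as team_name, last(business_unit) as business_unit by account_number

Since last() skips null values, within each account the most recent non-empty value is carried forward, and a new account starts fresh.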
I am trying to apply ML to predict the RAG status for payments based on volumes and processing time, using 90 days of historical data. Which approach would be better for implementing volume-based and processing-time thresholds to predict whether my current in-progress volume is fine or needs to be alerted on?
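One possible starting point with the Machine Learning Toolkit: learn a per-hour distribution of volume with the DensityFunction algorithm and flag outliers, rather than hand-picking static thresholds. The index, field, and model names below are made up for illustration:

index=payments earliest=-90d
| bin _time span=1h
| stats count as volume by _time
| eval hour_of_day=strftime(_time, "%H")
| fit DensityFunction volume by "hour_of_day" into payment_volume_model

The saved model can then be applied to current data with | apply payment_volume_model, and the same idea extends to a processing_time field.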
Hey team, we have integrated Splunk with our app and have been using it for the last few days. We wanted to know: does Splunk use the GetMetricData API from the AWS CloudWatch service? Since we integrated Splunk we have been seeing high CloudWatch costs, and we want to know the reason for it. Please let us know if Splunk uses that API. Thanks