All Topics

When I run the following search over a 24-hour period, it is always auto-finalized because it hits the 100MB disk usage limit.

index="app_ABC123" source="/var/abc/appgroup123/logs/app123/stat.log"
| stats count as TotalEvents by TxId
| sort TotalEvents desc
| where TotalEvents > 100

Is there any way to optimize the search so that it doesn't hit the limit?
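One common optimization, offered here as a sketch rather than a guaranteed fix, is to apply the where filter before the sort so that only transaction IDs above the threshold ever reach the sorting step (index and source are copied from the question):

index="app_ABC123" source="/var/abc/appgroup123/logs/app123/stat.log"
| stats count as TotalEvents by TxId
| where TotalEvents > 100
| sort 0 - TotalEvents

The results are identical, but the intermediate set that has to be spooled to disk shrinks when TxId has high cardinality. The 100MB ceiling itself is the role's search disk quota (srchDiskQuota in authorize.conf), so if the search still finalizes early an admin can raise that limit for the role.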
Hi All, I'm trying to extract two fields from _raw but it is proving to be a bit of a struggle. I want to extract ERRTEXT and MSGXML. I tried the field extraction option in Splunk and below are the rex expressions I got. The issue with the ERRTEXT rex is that it pulls in all of the MSGXML content as well. A regex that extracts everything after ERRTEXT and after MSGXML separately would be great.

| rex field=_raw "^(?:[^=\n]*=){7}(?P<ERRTEXT>.+)"
| rex field=_raw "^(?:[^=\n]*=){8}(?P<MSGXML>.+)"

Sample of the data ingested in Splunk (this data is pushed over by Splunk DB Connect):

2021-12-09 09:56:00.998, FACILITY_DETAILS="/v1/facilities/XXXX/arrears", FACILITY_ID="101010/", TIMESTAMP="2021-12-09 03:41:06.768342", CORRELATION="414d51204d425032514d30322020xxxda4b", ORIGIN="FROMORIGIIN", ERRCODE="code":"400",", ERRTEXT="detail":"must be greater than the previously recorded value of 105 days","source":{"pointer":"/data/days_past_due"}}]}", MSGXML="{"errors":[{"id":"3a59de59-8b99-4e4a-abfb-XXXXXX","status":"400","code":"400","title":"days_past_due is invalid","detail":"must be greater than the previously recorded value of 105 days","source":{"pointer":"/data/days_past_due"}}]}"
2021-12-09 09:56:00.998, FACILITY_DETAILS="/v1/facilities/XXXX/arrears", FACILITY_ID="101010/", TIMESTAMP="2021-12-09 03:41:06.768342", CORRELATION="414d51204d425032514d30322020xxxda4b", ORIGIN="FROMORIGIIN", ERRCODE="code":"400",", ERRTEXT="detail":"must be greater than the previously recorded value of 105 days","source":{"pointer":"/data/days_past_due"}}]}", MSGXML="{"errors":[{"id":"3a59de59-8b99-4e4a-abfb-XXXXXX","status":"400","code":"400","title":"days_past_due is invalid","detail":"must be greater than the previously recorded value of 105 days","source":{"pointer":"/data/days_past_due"}}]}"
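A possible alternative, assuming every event contains a MSGXML= field immediately after the ERRTEXT value as in the sample above, is to anchor the ERRTEXT capture on that literal boundary instead of counting = signs:

| rex field=_raw "ERRTEXT=\"(?P<ERRTEXT>.*?)\",\s*MSGXML=\"(?P<MSGXML>.*)\"$"

The non-greedy .*? stops the ERRTEXT capture at the first ", MSGXML= boundary, and the greedy MSGXML capture runs to the final quote of the event. If the field order ever varies between events, the two captures would need to be split back into separate rex commands.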
I am trying to consume the AppDynamics API from a .NET console application using RestSharp. Please help me with the best way to use it, as I am getting errors with RestSharp and I am unable to map the AppDynamics request format to RestSharp. Many thanks in advance. Regards, Sunitha
I have data coming in through a forwarder which contains SERVER_NAME along with other details, and I have a lookup created from a CSV file which holds SERVER_NAME, OWNER and REGION. My current dashboard has a filter on SERVER_NAME from the forwarder data, and now I need to create dashboard filters for OWNER and REGION, which come from the lookup and not from the forwarder data. I created the filters for OWNER and REGION with tokens "$owner_t$" and "$region_t$", which I use in the dashboard search as

| index = XXX  OWNER="$owner_t$" and REGION="$region_t$"

When I select these tokens the dashboard data is not filtered and shows "No results found". Can someone guide me on where I am going wrong?
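Since OWNER and REGION are not fields in the indexed events, filtering on them directly returns nothing. A sketch of one way to do it (the lookup name server_owners is hypothetical; substitute your own lookup definition) is to enrich the events first and filter afterwards:

index=XXX
| lookup server_owners SERVER_NAME OUTPUT OWNER REGION
| search OWNER="$owner_t$" REGION="$region_t$"

The lookup adds OWNER and REGION to each event by matching SERVER_NAME, and the later search command can then filter on the token values. The inputs would also need a wildcard default (for example *) so the panel still returns data when no owner or region is selected.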
Hi, I'm using the Splunk Add-on for Salesforce in Splunk Cloud and checking for any errors raised by the add-on with this query:

index=_internal sourcetype=sfdc:object:log "stanza_name=xxx"

For every indexing run, this error about cce_plugin_sfdc is generated. Is anyone having similar issues?

file=plugin.py, func_name=import_plugin_file, code_line_no=63 | [stanza_name=xxx] Module cce_plugin_sfdc aleady exists and it won't be reload, please rename your plugin module if it is required.
Hi, I'm trying to get wildcard lookups to work using the "lookup" command. I've followed the guidance to set the "Match Type" for the field in the lookup definition as per "Define a CSV lookup in Splunk Web" in the Splunk documentation (I don't have access to transforms.conf), but whatever I try, adding WILDCARD(foo) makes no difference, as if the feature is not being applied. I've found several posts where people report success, but I cannot replicate it myself.

Lookup example:

foo      bar
abc      1
*cba*    2

| makeresults | eval foo="x" | lookup mylookup foo

x="abc" matches
x="*cba*" matches
x="ab*" does not match
x="dcba" does not match

I'd rather not resort to inputlookup subsearches if possible as my applications are quite complex!

Splunk Version: 8.2.2.1

Many thanks in advance
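For reference, and as an assumption about what the UI setting ultimately writes, the Match Type field in the lookup definition corresponds to the match_type attribute on the lookup definition stanza in transforms.conf, which for wildcard matching on the foo column would look roughly like this:

[mylookup]
filename = mylookup.csv
match_type = WILDCARD(foo)

If an inspection of the lookup definition (btool or the REST endpoint for transforms lookups) does not show match_type = WILDCARD(foo), the UI change has not taken effect, with permissions or app context being common reasons, which would explain why *cba* only matches literally.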
Hi All, I'm having an issue trying to route events to an index by source; I'm posting this as a new question as I've not found anything that helps me understand how or where to configure it. We have events being streamed to a HEC token hosted on a HF, which then forwards the events to an indexer, and all events are ending up in the main index on the indexer. How can events with the default field source 'xyz' be sent to a specific index 'index_xyz'? I've seen numerous posts about routing to a specific index using the sourcetype, but not the source. I know props.conf and transforms.conf are needed, but I've not seen any examples that use source, and I'm also unsure whether they should be implemented on the HF or the indexer... The reasoning for routing on source is that these events are always listed with the token name 'xyz'. TIA, Daniel
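A minimal sketch of source-based index routing, assuming the data arrives on the HEC event endpoint and is parsed on the HF (so the files belong on the HF, not the indexer); the transform name route_xyz_to_index is just illustrative:

# props.conf (on the heavy forwarder)
[source::xyz]
TRANSFORMS-route_xyz = route_xyz_to_index

# transforms.conf
[route_xyz_to_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = index_xyz

The [source::xyz] stanza scopes the transform to events whose source is exactly xyz, REGEX = . simply matches every such event, and the transform overwrites the index metadata key so the events land in index_xyz, which must already exist on the indexer.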
Hi! We're looking into deploying Splunk in Azure, and I wonder if anyone has good suggestions for long-term (3 years) cold bucket storage in Azure. We don't need frozen storage. We want to use Premium SSD for hot/warm, but managed disks for cold storage become really expensive. Can we use, for instance, Azure Files, Blob Storage or Data Lake for this purpose? SmartStore in AWS or GCP is not an option for us. Thanks!
I'm running Splunk Enterprise 8.0.5 on Windows 2016 and looking to upgrade to 8.2.3. We run the following:

2 indexers
1 search head
1 master node (cluster master, deployment server and license master)

We are currently only backing up the index files, which is very risky, so I need to get the configuration backed up as well. From reading the documentation it seems that generally we only need to back up $SPLUNK_HOME/etc/. Is there any requirement to back up /var/ or any other folders though?
My tabular output contains columns/fields like:

account_number | colour | team_name | business_unit

I am getting this output by aggregating with stats BY account_number. Some of the events with the same account_number have null colour, team_name and business_unit values, so I used

| streamstats last(colour) as colour, last(team_name) as team_name, last(business_unit) as business_unit

to populate them from the previous row's values. I want streamstats to populate the empty fields with the previous row's value ONLY IF the previous row's account_number is the same as the current row's.

The issue I am getting now is: say I have three rows with account_number 0001, and the 4th row has account_number 0002 with the other three fields (colour, team_name and business_unit) empty. streamstats populates them with the previous 0001 row's values, which is incorrect.
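A sketch of one way to keep the fill within each account, relying on the fact that streamstats with a BY clause only looks at earlier rows in the same group and that last() skips rows where the field is null:

| streamstats last(colour) as colour, last(team_name) as team_name, last(business_unit) as business_unit by account_number

With the by account_number clause the running window resets per account, so a 0002 row with empty fields can only inherit values from earlier 0002 rows. This assumes the rows are already in the intended order when streamstats runs.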
I am trying to apply ML to predict the RAG status for payments based on volumes and processing time, using 90 days of historical data. Which approach would be better for implementing volume-based and processing-time thresholds to predict whether my current in-progress volume is fine or needs to be alerted on?
Hey team,

We have integrated Splunk with our app and have been using it for the last few days. We wanted to know whether Splunk uses the GetMetricData API from the AWS CloudWatch service. Since we integrated Splunk we have been seeing high CloudWatch costs, and we want to understand the reason for it. Please let us know if Splunk calls that API. Thanks
Hi, I have 4 huge log files ingested into Splunk:

File1
File2
File3
File4

Now I want to know: when I search for a specific string that only exists in File1, what happens in the search process? For example, if I explicitly exclude File2, File3 and File4, does that affect my search performance? Or does Splunk automatically ignore them because they do not contain that string?

Any ideas?

Thanks
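For reference, the two variants being compared would look something like this (the index name and file path are placeholders, not taken from the question):

index=my_index "specific string"
index=my_index source="/var/log/File1" "specific string"

An easy way to see whether the explicit source filter changes anything for your data is to run both versions over the same time range and compare the scanCount and runtime shown in the Job Inspector; source is an indexed field, so adding it narrows what Splunk has to consider before matching the string.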
RAWDATA:

user_name  machine_name  event_name  logon_time
user1      machine1      logon       12/9/2021 7:20
user1      machine1      logout      12/9/2021 7:22
user1      machine1      logon       12/9/2021 8:20
user1      machine1      logout      12/9/2021 8:22

From the above data, I am trying to retrieve the individual session duration for each user and machine and put it in a chart. I have a query which renders the aggregate of the sessions for a machine and user:

user_name  machine_name  event_name  logon_time      logout_time     session_duration
user1      machine1      logon       12/9/2021 7:20  12/9/2021 7:22  1:01:51

However, I am trying to retrieve the session duration for each login that happened at any time. The desired result is:

user_name  machine_name  event_name  logon_time      logout_time     session_duration
user1      machine1      logon       12/9/2021 7:20  12/9/2021 7:22  0:01:51
user2      machine1      logon       12/9/2021 8:20  12/9/2021 8:22  0:01:51

Could someone please help me correct my query to get each session from the logon and logout events? TIA

My query:

index=foobar sourcetype=foo source=bar
| dedup event_time
| table machine_name, user_name, event_name, event_time
| streamstats current=f last(event_time) as logout_time by machine_name
| table , machine_name, user_name, event_name, event_time, logout_time
| where event_name="LOGON" and logout_time!=""
| eval type=typeof(logout_time)
| eval logon_time=event_time
| convert timeformat="%Y-%m-%d %H:%M:%S" mktime(logon_time) as assigned_at
| convert timeformat="%Y-%m-%d %H:%M:%S" mktime(logout_time) as released_at
| eval session_duration=(released_at-assigned_at)
| eval session_duration=tostring(session_duration, "duration")
| table user_type, user_name, site_id, machine_name, event_name, logon_time, logout_time, session_duration
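One possible approach, offered as a sketch using the transaction command rather than a fix to the streamstats pipeline above (the index, sourcetype and source values are copied from the question, and the field names follow the sample data):

index=foobar sourcetype=foo source=bar
| transaction machine_name user_name startswith="event_name=logon" endswith="event_name=logout" maxevents=2
| eval logon_time=strftime(_time, "%m/%d/%Y %H:%M")
| eval logout_time=strftime(_time + duration, "%m/%d/%Y %H:%M")
| eval session_duration=tostring(duration, "duration")
| table user_name, machine_name, logon_time, logout_time, session_duration

transaction pairs each logon with the following logout per machine and user and computes duration automatically. For very large volumes a streamstats-based pairing scales better, but the transaction form is easier to read and verify.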
Team, I'm a newbie at writing Splunk queries. Could you please give me guidance on how to design an SPL search for the use case below? Here are sample logs:

AIRFLOW_CTX_DAG_OWNER=Prathibha
AIRFLOW_CTX_DAG_ID=M_OPI_NPPV_NPPES
AIRFLOW_CTX_TASK_ID=NPPES_INSERT
AIRFLOW_CTX_EXECUTION_DATE=2021-12-08T18:57:24.419709+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-12-08T18:57:24.419709+00:00
[2021-12-08 19:12:59,923] {{cursor.py:696}} INFO - query: [INSERT OVERWRITE INTO IDRC_OPI_DEV.CMS_BDM_OPI_NPPES_DEV.OH_IN_PRVDR_NPPES SELEC...]
[2021-12-08 19:13:13,514] {{cursor.py:720}} INFO - query execution done
[2021-12-08 19:13:13,570] {{logging_mixin.py:104}} INFO - [2021-12-08 19:13:13,570] {{snowflake.py:277}} INFO - Rows affected: 1
[2021-12-08 19:13:13,592] {{logging_mixin.py:104}} INFO - [2021-12-08 19:13:13,592] {{snowflake.py:278}} INFO - Snowflake query id: 01a0d120-0000-12da-0000-0024028474a6
[2021-12-08 19:13:13,612] {{logging_mixin.py:104}} INFO - [2021-12-08 19:13:13,612] {{snowflake.py:277}} INFO - Rows affected: 7019070
[2021-12-08 19:13:13,632] {{logging_mixin.py:104}} INFO - [2021-12-08 19:13:13,632] {{snowflake.py:278}} INFO - Snowflake query id: 01a0d120-0000-12ce-0000-002402848486
[2021-12-08 19:13:13,811] {{taskinstance.py:1192}} INFO - Marking task as SUCCESS. dag_id=M_OPI_NPPV_NPPES, task_id=NPPES_INSERT, execution_date=20211208T185724, start_date=20211208T191256, end_date=20211208T191313
[2021-12-08 19:13:13,868] {{logging_mixin.py:104}} INFO - [2021-12-08 19:13:13,867] {{local_task_job.py:146}} INFO - Task exited with return code

Expected output in tabular form:

DAG_ID             TASK_ID       STATUS   ROWS_EFFECTED
M_OPI_NPPV_NPPES   NPPES_INSERT  SUCCESS  1,7019070

Thanks, Sumit
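A sketch under the assumption that all of the lines above are indexed as one multiline event, and that the index and sourcetype names used here (airflow, airflow:task) are placeholders to replace with your own:

index=airflow sourcetype=airflow:task
| rex "AIRFLOW_CTX_DAG_ID=(?<DAG_ID>\S+)"
| rex "AIRFLOW_CTX_TASK_ID=(?<TASK_ID>\S+)"
| rex "Marking task as (?<STATUS>\w+)\."
| rex max_match=0 "Rows affected:\s+(?<ROWS_EFFECTED>\d+)"
| eval ROWS_EFFECTED=mvjoin(ROWS_EFFECTED, ",")
| table DAG_ID, TASK_ID, STATUS, ROWS_EFFECTED

max_match=0 lets the last rex capture every "Rows affected" value in the event as a multivalue field, and mvjoin flattens it to the comma-separated form in the expected output. If each log line is indexed as a separate event instead, the captures would need a stats ... by DAG_ID, TASK_ID pass to bring them back together.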
Hi, I am currently working in a new environment where I am trying to do field extraction based on a pipe delimiter.

1) A new app (say my_app) with only an inputs.conf is pushed onto the target UF through the deployment server.

inputs.conf:
[monitor:///path1/file1]
index=my_index
soyrcetype=my_st

2) Data is getting ingested, and the requirement is to do field extraction on all the events separated by the pipe delimiter (12345|2021-09-12 11:12:34 345|INFO|blah|blah|blah blah).

My approach: create a new app (a plain folder my_app) on my deployer and push it to the search heads with the conf files below. I felt this was simple to achieve, but somehow it's not working. Did I miss any step to link the app on the forwarder and the SHC?

ls my_app/default/
app.conf props.conf transforms.conf

props.conf
[my_st]
REPORT-getfields = getfields

transforms.conf
[getfields]
DELIMS = "|"
FIELDS = "thread_id","timestamp","loglevel","log_tag","message"
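One way to verify whether the search-time pieces actually landed and merged on a search head, offered as a sketch to run on one SHC member (the stanza names my_st and getfields come from the question):

$SPLUNK_HOME/bin/splunk btool props list my_st --debug
$SPLUNK_HOME/bin/splunk btool transforms list getfields --debug

btool prints the merged configuration and the file each line came from, so it shows quickly whether my_app reached the members and whether the REPORT-getfields and [getfields] stanzas are visible. It is also worth confirming in a search that the events really carry sourcetype=my_st, since the extraction is keyed on that sourcetype.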
ERROR OBSERVED

TASK [splunk_universal_forwarder : Setup global HEC] ***************************
task path: /opt/ansible/roles/splunk_common/tasks/set_as_hec_receiver.yml:4
fatal: [localhost]: FAILED! => {
    "cache_control": "private",
    "changed": false,
    "connection": "Close",
    "content_length": "130",
    "content_type": "text/xml; charset=UTF-8",
    "date": "Tue, 07 Dec 2021 09:34:20 GMT",
    "elapsed": 0,
    "redirected": false,
    "server": "Splunkd",
    "status": 401,
    "url": "https://127.0.0.1:8089/services/data/inputs/http/http",
    "vary": "Cookie, Authorization",
    "www_authenticate": "Basic realm=\"/splunk\"",
    "x_content_type_options": "nosniff",
    "x_frame_options": "SAMEORIGIN"
}
MSG: Status code was 401 and not [200]: HTTP Error 401: Unauthorized

How I'm adding the universal forwarder to my deployment in K8s:

- name: splunk-forwarder
  image: splunk/universalforwarder:8.2
  env:
    - name: SPLUNK_START_ARGS
      value: "--accept-license"
    - name: ANSIBLE_EXTRA_FLAGS
      value: "-vv"
    - name: SPLUNK_CMD
      value: 'install app /tmp/splunk-creds/splunkclouduf.spl, add monitor /app/logs'
    - name: SPLUNK_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mia-env-secret
          key: SPLUNK_UF_PASSWORD
  resources: {}
  volumeMounts:
    - name: splunk-uf-creds-spl
      mountPath: tmp/splunk-creds
    - name: logs
      mountPath: /app/logs

There aren't many examples of how to use the Docker universalforwarder image out there, so any help or reference on how to use the containerized version of the UF is appreciated.
Hi, I'm using this add-on on Splunk Cloud to index custom Salesforce objects, using LastModifiedDate as the query criterion. When I look at the Salesforce queries in the _internal logs, I see Splunk periodically skipping over some rows.

For instance, Splunk will send this query to get the next batch of records to index:

SELECT .. WHERE LastModifiedDate > 2021-11-25T10:10:51.000+0000 ORDER BY LastModifiedDate ASC LIMIT 1000

The first row in the result has a LastModifiedDate of 2021-11-26T09:10:04.000Z, which I would expect Splunk to use in its next indexing round. However, the next entry in the _internal logs sends a different dateTime, effectively missing data logged between 09:10:04 and 09:15:33:

SELECT ... WHERE LastModifiedDate=2021-11-26T09:15:33.000+0000

I'm assuming this is how the add-on works, as I can't find any documentation that explains it. Has anyone had this issue and, more importantly, found a fix?

I query the _internal logs using this search:

index=_internal sourcetype=sfdc:object:log "stanza_name=<my stanza>"

Thanks!
Hi all, I'm new to the back-end configuration of Splunk; I've recently taken over a Splunk instance and I've been tasked with tidying it up a bit. The first thing I noticed is that there is a lot of noise coming in from event ID 5156, so I would like to blacklist this particular ID. My knowledge here is somewhat limited; the environment has one heavy forwarder and 3 indexers clustered together. When I try to read the configuration of the universal forwarder on the domain controller, there is no outputs.conf in the C:\Program Files\SplunkUniversalForwarder\etc\system\local directory, so I don't know with certainty where the events are being sent.

We have the Splunk Add-on for Microsoft Windows enabled on the HF, indexers and search head. However, I have only made changes to the inputs.conf located in /opt/splunk/etc/apps/splunk_ta_win/local on the HF. I've added the following line:

blacklist3 = EventCode="5156" Message="Object Type:(?!\s*groupPolicyContainer)"

as blacklist1 and blacklist2 were already present and a search for those events returns nothing (meaning they're being filtered). I also restarted the Splunk service. I've just run a search for the past few hours and I'm still seeing 5156 come through. Am I doing anything wrong, or do I need to make the config changes on the indexers as well? Currently the config for the Security input looks like this:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist3 = EventCode="5156" Message="Object Type:(?!\s*groupPolicyContainer)"
renderXml=true

The other thing that has me confused is that the 5156 events being returned are coming from "XmlWinEventLog:Security" and not "WinEventLog:Security". Does Splunk automatically add Xml to the front of the name if renderXml=true, or was that configured previously? I can't see any XML event stanzas in this file.

If anyone can point out what I'm doing wrong, that would be great. All the Splunk instances I'm referring to are on CentOS and they're all running 7.3.0; upgrading to 8 is in the pipeline. Am I looking in the completely wrong area, i.e. outside of this app? At this point in time I still cannot determine the configuration on the universal forwarders or where their events are being sent, as the outputs.conf doesn't exist.
How does someone get approval from Splunk to reproduce Splunk intellectual property in training content? The use case is to create a demonstration video showing how to integrate Splunk DB Connect with another software product. The video would display the Splunk logo, trademarks, and its UI.