All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hey team, we have integrated Splunk with our app and have been using it for the last few days. We wanted to know whether Splunk uses the GetMetricsData API from the AWS CloudWatch service. Since we integrated Splunk, our CloudWatch costs have gone up, and we want to understand the reason for it. Please let us know if Splunk uses such a service. Thanks
Hi, I have 4 huge log files that are ingested into Splunk: File1, File2, File3, File4. Now I want to know: when I search for a specific string that exists only in File1, what will happen? What happens in the search process? For example, if I explicitly exclude File2, File3, and File4, does it affect my search performance, or does Splunk automatically ignore them because they do not contain that string? Any idea? Thanks
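For comparison, a minimal sketch of the two variants, assuming each file is ingested as its own source (the index name and source path below are placeholders, not taken from the post):

index=my_index "specific string"
index=my_index source="/path/to/file1" "specific string"

In both cases the indexers check each bucket's bloom filter and tsidx lexicon for the term before touching raw data, so events from files that never contain the string are generally skipped cheaply; an explicit source filter mostly helps when the term also appears in the other files, or when you want to bound the search further.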
RAWDATA:
user_name machine_name event_name logon_time
user1 machine1 logon 12/9/2021 7:20
user1 machine1 logout 12/9/2021 7:22
user1 machine1 logon 12/9/2021 8:20
user1 machine1 logout 12/9/2021 8:22

From the above data, I am trying to retrieve the individual session duration for each user and machine and put it in a chart. I have a query that renders the aggregate of the sessions for a machine and user:
user_name machine_name event_name logon_time logout_time session_duration
user1 machine1 logon 12/9/2021 7:20 12/9/2021 7:22 1:01:51

However, I am trying to retrieve the session duration for each login that happened at any time. The desired result is:
user_name machine_name event_name logon_time logout_time session_duration
user1 machine1 logon 12/9/2021 7:20 12/9/2021 7:22 0:01:51
user2 machine1 logon 12/9/2021 8:20 12/9/2021 8:22 0:01:51

Could someone please help me correct my query to get each session by logon and logout events? TIA
My query:
index=foobar sourcetype=foo source=bar
| dedup event_time
| table machine_name, user_name, event_name, event_time
| streamstats current=f last(event_time) as logout_time by machine_name
| table machine_name, user_name, event_name, event_time, logout_time
| where event_name="LOGON" and logout_time!=""
| eval type=typeof(logout_time)
| eval logon_time=event_time
| convert timeformat="%Y-%m-%d %H:%M:%S" mktime(logon_time) as assigned_at
| convert timeformat="%Y-%m-%d %H:%M:%S" mktime(logout_time) as released_at
| eval session_duration=(released_at-assigned_at)
| eval session_duration=tostring(session_duration, "duration")
| table user_type, user_name, site_id, machine_name, event_name, logon_time, logout_time, session_duration
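A minimal sketch of one way to pair each logon with the logout that follows it, using the transaction command; the field names are taken from the post, but whether event_name is upper- or lower-case in the raw data is an assumption:

index=foobar sourcetype=foo source=bar
| transaction machine_name, user_name startswith=eval(lower(event_name)="logon") endswith=eval(lower(event_name)="logout") maxevents=2
| eval logon_time=strftime(_time, "%m/%d/%Y %H:%M:%S")
| eval logout_time=strftime(_time + duration, "%m/%d/%Y %H:%M:%S")
| eval session_duration=tostring(duration, "duration")
| table user_name, machine_name, logon_time, logout_time, session_duration

transaction emits one row per logon/logout pair and computes the duration field itself; a streamstats-based variant can do the same thing if transaction becomes too heavy on large data volumes.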
Team, I'm a newbie at writing Splunk queries. Could you please give me guidance on how to design an SPL query for the use case below? Here are sample logs:
AIRFLOW_CTX_DAG_OWNER=Prathibha
AIRFLOW_CTX_DAG_ID=M_OPI_NPPV_NPPES
AIRFLOW_CTX_TASK_ID=NPPES_INSERT
AIRFLOW_CTX_EXECUTION_DATE=2021-12-08T18:57:24.419709+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-12-08T18:57:24.419709+00:00
[2021-12-08 19:12:59,923] {{cursor.py:696}} INFO - query: [INSERT OVERWRITE INTO IDRC_OPI_DEV.CMS_BDM_OPI_NPPES_DEV.OH_IN_PRVDR_NPPES SELEC...]
[2021-12-08 19:13:13,514] {{cursor.py:720}} INFO - query execution done
[2021-12-08 19:13:13,570] {{logging_mixin.py:104}} INFO - [2021-12-08 19:13:13,570] {{snowflake.py:277}} INFO - Rows affected: 1
[2021-12-08 19:13:13,592] {{logging_mixin.py:104}} INFO - [2021-12-08 19:13:13,592] {{snowflake.py:278}} INFO - Snowflake query id: 01a0d120-0000-12da-0000-0024028474a6
[2021-12-08 19:13:13,612] {{logging_mixin.py:104}} INFO - [2021-12-08 19:13:13,612] {{snowflake.py:277}} INFO - Rows affected: 7019070
[2021-12-08 19:13:13,632] {{logging_mixin.py:104}} INFO - [2021-12-08 19:13:13,632] {{snowflake.py:278}} INFO - Snowflake query id: 01a0d120-0000-12ce-0000-002402848486
[2021-12-08 19:13:13,811] {{taskinstance.py:1192}} INFO - Marking task as SUCCESS. dag_id=M_OPI_NPPV_NPPES, task_id=NPPES_INSERT, execution_date=20211208T185724, start_date=20211208T191256, end_date=20211208T191313
[2021-12-08 19:13:13,868] {{logging_mixin.py:104}} INFO - [2021-12-08 19:13:13,867] {{local_task_job.py:146}} INFO - Task exited with return code
Expected output in tabular form:
DAG_ID TASK_ID STATUS ROWS_AFFECTED
M_OPI_NPPV_NPPES NPPES_INSERT SUCCESS 1,7019070
Thanks, Sumit
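A rough sketch of one way to pull those columns out with rex and stats; the index and sourcetype names are placeholders, and it assumes the AIRFLOW_CTX_* lines and the task log lines end up in the same events (or already share a common field to group on):

index=airflow sourcetype=airflow:task
| rex "AIRFLOW_CTX_DAG_ID=(?<dag_id>\S+)"
| rex "AIRFLOW_CTX_TASK_ID=(?<task_id>\S+)"
| rex "Marking task as (?<status>\w+)"
| rex max_match=0 "Rows affected: (?<rows_affected>\d+)"
| stats values(status) as STATUS values(rows_affected) as ROWS_AFFECTED by dag_id, task_id

If every log line is its own event, dag_id and task_id will only exist on some of them, so an additional join key (for example the DAG run id) would be needed before the stats.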
Hi, I am currently working in a new environment where I am trying to do field extraction based on a pipe delimiter.
1) A new app (say my_app) with only an inputs.conf is pushed onto the target UF through the deployment server.
inputs.conf:
[monitor:///path1/file1]
index = my_index
sourcetype = my_st
2) Data is getting ingested, and the requirement is to do field extraction on all the events, which are separated by a pipe delimiter (12345|2021-09-12 11:12:34 345|INFO|blah|blah|blah blah).
My approach: create a new app (a plain folder my_app) on my deployer and push it to the search heads with the conf files below. I felt it was simple to achieve and did this, but somehow it's not working. Did I miss any step to link the app on the forwarder and the SHC?
ls my_app/default/
app.conf props.conf transforms.conf
props.conf
[my_st]
REPORT-getfields = getfields
transforms.conf
[getfields]
DELIMS = "|"
FIELDS = "thread_id","timestamp","loglevel","log_tag","message"
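A hedged way to check, on a search head, which settings actually win for this sourcetype and where they come from (standard btool usage; my_st and getfields are from the post):

$SPLUNK_HOME/bin/splunk btool props list my_st --debug
$SPLUNK_HOME/bin/splunk btool transforms list getfields --debug

btool prints the app each line comes from, which quickly shows whether the app pushed from the deployer reached the SHC members and whether anything overrides the REPORT setting. Also, the sample event has six pipe-separated values but FIELDS lists only five names, so a sixth field name may be needed.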
ERROR OBSERVED
TASK [splunk_universal_forwarder : Setup global HEC] ***************************
task path: /opt/ansible/roles/splunk_common/tasks/set_as_hec_receiver.yml:4
fatal: [localhost]: FAILED! => {
    "cache_control": "private",
    "changed": false,
    "connection": "Close",
    "content_length": "130",
    "content_type": "text/xml; charset=UTF-8",
    "date": "Tue, 07 Dec 2021 09:34:20 GMT",
    "elapsed": 0,
    "redirected": false,
    "server": "Splunkd",
    "status": 401,
    "url": "https://127.0.0.1:8089/services/data/inputs/http/http",
    "vary": "Cookie, Authorization",
    "www_authenticate": "Basic realm=\"/splunk\"",
    "x_content_type_options": "nosniff",
    "x_frame_options": "SAMEORIGIN"
}
MSG:
Status code was 401 and not [200]: HTTP Error 401: Unauthorized

How I'm adding the universal forwarder to my deployment in K8s:
- name: splunk-forwarder
  image: splunk/universalforwarder:8.2
  env:
    - name: SPLUNK_START_ARGS
      value: "--accept-license"
    - name: ANSIBLE_EXTRA_FLAGS
      value: "-vv"
    - name: SPLUNK_CMD
      value: 'install app /tmp/splunk-creds/splunkclouduf.spl, add monitor /app/logs'
    - name: SPLUNK_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mia-env-secret
          key: SPLUNK_UF_PASSWORD
  resources: {}
  volumeMounts:
    - name: splunk-uf-creds-spl
      mountPath: tmp/splunk-creds
    - name: logs
      mountPath: /app/logs

There aren't many examples of how to use the Docker universalforwarder image out there, so any help or reference on how to use the containerized version of the UF is appreciated.
Hi, I'm using this add-on on Splunk Cloud to index custom Salesforce objects, using LastModifiedDate as the query criterion. When I look at the Salesforce queries in the _internal logs, I see Splunk periodically skipping over some rows. For instance, Splunk will send this query to get the next batch of records to index:
SELECT .. WHERE LastModifiedDate > 2021-11-25T10:10:51.000+0000 ORDER BY LastModifiedDate ASC LIMIT 1000
The first row in the result has a LastModifiedDate of 2021-11-26T09:10:04.000Z, which I would expect Splunk to use in its next indexing round. However, the next entry in the _internal logs sends a different dateTime, effectively missing data logged between 09:10:04 and 09:15:33:
SELECT ... WHERE LastModifiedDate=2021-11-26T09:15:33.000+0000
I'm making an assumption this is how the add-on works, as I can't find any documentation that explains it. Has anyone had this issue and, more importantly, found a fix? I query the _internal logs using this search:
index=_internal sourcetype=sfdc:object:log "stanza_name=<my stanza>"
Thanks!
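A hedged sketch for lining up successive checkpoint timestamps from those _internal events so the jumps become visible; the rex pattern is an assumption about how the SOQL text appears in the raw log:

index=_internal sourcetype=sfdc:object:log "stanza_name=<my stanza>"
| rex "LastModifiedDate\s*[>=]+\s*(?<query_from>[0-9T:.+-]+)"
| sort 0 _time
| streamstats current=f last(query_from) as previous_query_from
| table _time, previous_query_from, query_from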
Hi all, I'm new to the back-end configuration of Splunk. I've recently taken over a Splunk instance and have been tasked with tidying it up a bit. The first thing I noticed is that there is a lot of noise coming in from event ID 5156, so I would like to blacklist this particular ID. My knowledge here is somewhat limited; the environment has one heavy forwarder and 3 indexers clustered together. When I try to read the configuration of the universal forwarder on the domain controller, there is no outputs.conf in the C:\Program Files\SplunkUniversalForwarder\etc\system\local directory, so I don't know with certainty where the events are being sent.
We have the Splunk Add-on for Microsoft Windows enabled on the HF, indexers and search head. However, I have only made changes to the inputs.conf located in /opt/splunk/etc/apps/splunk_ta_win/local on the HF. I've added the following line:
blacklist3 = EventCode="5156" Message="Object Type:(?!\s*groupPolicyContainer)"
as blacklist1 and blacklist2 were already present and I couldn't return a search for those events (meaning they're being filtered). I also restarted the Splunk service. I've just run a search for the past few hours and I'm still seeing 5156 come through. Am I doing anything wrong, or do I need to make the config changes on the indexers as well? Currently the config for the Security input looks like this:
[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist3 = EventCode="5156" Message="Object Type:(?!\s*groupPolicyContainer)"
renderXml=true
The other thing that has me confused is that the 5156 events being returned are coming in as "XmlWinEventLog:Security" and not "WinEventLog:Security". Does Splunk automatically add Xml to the front of the name if renderXml=true, or was that configured prior? I can't see any XML event stanzas in this file. If anyone can point out what I'm doing wrong, that would be great. All the Splunk instances I'm referring to are on CentOS, and they're all running 7.3.0. Upgrading to 8 is in the pipeline. Am I looking in the completely wrong area, i.e., outside of the app name? At this point in time I still cannot determine the configuration on the universal forwarders and where the events are being sent, as the outputs.conf doesn't exist.
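For reference, a minimal sketch (an assumption to compare against, not a confirmed fix) of a blacklist keyed on the event code alone, since 5156 messages may not contain an "Object Type:" line for the Message regex to match the way the 4662/566 entries do:

[WinEventLog://Security]
# drop Windows Filtering Platform connection events regardless of message text
blacklist3 = EventCode="5156"

Also worth noting: WinEventLog blacklists are applied by the instance that actually runs the input stanza, so it is worth confirming which instance is collecting the Security log from the domain controllers.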
How does someone get approval from Splunk to reproduce Splunk intellectual property in training content? The use case is creating a demonstration video that shows how to integrate Splunk DB Connect with another software product. The video would display the Splunk logo, trademarks, and its UI.
Given an event log specification of:
"{DateTime} Times: Online_1: CNCT_TM: {CNCT_TM}; LOG_TM: {LOG_TM} Online_2: CNCT_TM: {CNCT_TM}; LOG_TM: {LOG_TM} Offline_1: CNCT_TM: {CNCT_TM}; LOG_TM: {LOG_TM} Offline_2: CNCT_TM: {CNCT_TM}; LOG_TM: {LOG_TM}"
which is logged 4 times a day, and an example entry like:
"2021-12-08 14:31:59 Times: Online_1: CNCT_TM: 2021-12-08 14:47:13.873; LOG_TM: 2021-12-08 14:47:16.387; Online_2: CNCT_TM: 2021-12-08 14:47:49.837; LOG_TM: 2021-12-08 14:47:50.480; Offline_1: CNCT_TM: 2021-12-08 14:48:27.303; LOG_TM: 2021-12-08 14:48:28.927; Offline_2: CNCT_TM: 2021-12-08 14:48:56.673; LOG_TM: 2021-12-08 14:48:58.750"
how do I calculate and graph the range, in minutes and seconds (just seconds would be fine for me), between the maximum and minimum of the 8 timestamps embedded in the log entry? Ultimately, I would like to create an alert if a range greater than something like 30 minutes were to occur.
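A minimal sketch, assuming the eight timestamps can be pulled straight out of _raw with rex; the index and sourcetype are placeholders, and the 1800-second threshold mirrors the 30-minute alert idea:

index=your_index sourcetype=your_sourcetype
| rex max_match=0 "(?:CNCT_TM|LOG_TM): (?<stamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})"
| eval first=mvindex(mvsort(stamp), 0), last=mvindex(mvsort(stamp), -1)
| eval range_sec=strptime(last, "%Y-%m-%d %H:%M:%S.%3N") - strptime(first, "%Y-%m-%d %H:%M:%S.%3N")
| timechart span=6h max(range_sec) as max_range_sec

Because the timestamps are in YYYY-MM-DD order, a lexicographic mvsort also sorts them chronologically; for the alert, a scheduled search ending in | where range_sec > 1800 would flag any entry spanning more than 30 minutes.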
Hello, I am wondering what the best way is to check whether the value in one of my fields matches one of the values in a multivalue field. I cannot use mvexpand and a where clause due to the storage limit I encounter. Is there a way to show that "match" matches one of the values in field2, but "miss" does not?
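A minimal sketch using mvfind instead of mvexpand, assuming the single-value field is field1 and the multivalue field is field2 (both names inferred from the post); mvfind treats its second argument as a regex, so field1 values containing regex metacharacters would need escaping:

<your existing search>
| eval is_match=if(isnotnull(mvfind(field2, "^" . field1 . "$")), "match", "miss")

mvfind returns the index of the first matching value or null, so is_match comes out as "match" for values present in field2 and "miss" otherwise, without expanding the multivalue field.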
Hey everyone! I have what I would consider a complex problem, and I was hoping to get some guidance on the best way to handle it.
We are attempting to log events from an OpenShift (Kubernetes) environment. So far, I've successfully gotten raw logs coming in from Splunk Connect for Kubernetes, into our heavy forwarder via HEC, and then into our indexer. The data being ingested has a bunch of metadata about the pod name, container name, etc. from this step. However, the problem is what to do with it from there.
In this specific configuration, the individual component logs are combined into a single stream, with a few fields of metadata attached to the beginning. After this metadata, the event 100% matches what I'd consider a "standard" event, or something Splunk is more used to processing. For example:
tomcat-access;console;test;nothing;127.0.0.1 - - [08/Dec/2021:13:25:21 -0600] "GET /idp/status HTTP/1.1" 200 3984
This is first semicolon-delimited, and then space-delimited, as follows:
"tomcat-access" is the name of the container component that generated the file.
"console" indicates the source (console or file name).
"test" indicates the environment.
"nothing" indicates the user token.
Everything after the last semicolon is the real log. In this example, it is a tomcat-access sourcetype.
Compare this to another line in the same log:
shib-idp;idp-process.log;test;nothing;2021-12-08 13:11:21,335 - 10.103.10.30 - INFO [Shibboleth-Audit.SSO:283] - 10.103.10.30|2021-12-08T19:10:57.659584Z|2021-12-08T19:11:21.335145Z|sttreic
shib-idp is the name of the container component that generated the log.
idp-process.log is the source file in that component.
test is the environment.
nothing is the user token.
Everything after that last semicolon is the Shibboleth process log. Notably, this part uses pipes as delimiters.
The SCK components, as I have them configured now, ship all these sources to "ocp:container:shibboleth" (or something like that). When they are shipped over, metadata is added for the container_name, pod_name, and other CRI-based log data.
What I am aiming to do: I would like to use the semicolon-delimited parts of the event to tell the heavy forwarder which sourcetypes to work with. Ideally, I would like to cut down on having to make my own sourcetypes and regex, but I can do that if I must. So for the tomcat-access example above, I'd want:
All the SCK / OpenShift related fields to stick with the event.
The event to be chopped up into 5 segments.
The event type to be recognized by the first 2 fields (there is some duplication in the first field, so the second field would be the most important).
The first 4 segments to be appended as field information (like "identifier" or "internal_source").
The 5th segment to be exported to another sourcetype for further processing (in this case, "tomcat_localhost_access" from "Splunk_TA_tomcat"). All the other fields would stick with the event as Splunk_TA_tomcat did its field extractions.
If this isn't possible, I could make a unique sourcetype transform for each event type - the source program has 8 potential sources. But that would involve quite a bit of duplication. Even as I type this out, I'm getting the sinking feeling that I'll need to just bite the bullet and make 8 different transforms. But one can hope, right?
Any help would be appreciated. I've gotten through sysadmin and data admin training, but nothing more advanced than that.
I suspect I'll need to use this pattern in the future for other OpenShift logs of ours, but I don't know at this stage.
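A minimal sketch of the index-time half of this on the heavy forwarder, under the assumption that the incoming sourcetype really is ocp:container:shibboleth and that rewriting _raw (dropping the four-segment prefix) is acceptable; the stanza names are made up:

props.conf
[ocp:container:shibboleth]
TRANSFORMS-route_and_strip = route_tomcat_access, strip_prefix

transforms.conf
[route_tomcat_access]
# events whose first segment is tomcat-access are re-typed so that
# Splunk_TA_tomcat's search-time extractions apply to them
REGEX = ^tomcat-access;
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::tomcat_localhost_access

[strip_prefix]
# keep only the payload after the fourth semicolon
REGEX = ^(?:[^;]*;){4}(.*)$
DEST_KEY = _raw
FORMAT = $1

One route_* transform per component would still be needed (the eight-transform concern), and the first four segments would have to be preserved some other way, for example as indexed fields written by additional transforms with WRITE_META, before _raw is rewritten.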
Is there any way to have the Message area show below the Included results? I have a rather lengthy but important reference that requires the long-time recipients to scroll down through it each time the email is generated to see the results. Using the footer area is not an option as I don't have access to it and also do not want it showing up on any other email alert. Cheers!
We are not getting data in the itsi_tracked_alerts and itsi_grouped_alerts indexes, and we are also not getting data in the itsi_summary index.
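A quick hedged check to confirm whether anything at all has landed in those indexes over a recent time range:

| tstats count where (index=itsi_tracked_alerts OR index=itsi_grouped_alerts OR index=itsi_summary) by index, sourcetype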
Hello, I have a question about RUM, the new MINT replacement, and hopefully this is the correct board. Our project is separated into two parts, server-side and client-side. The latter is an Android application. In the past, we successfully implemented MINT in the app. The main use is receiving crash alerts, monitoring handled exceptions, and sending logs. Our server side uses Splunk Enterprise for hosting its logs. In the past, we successfully managed to connect MINT to Splunk Enterprise via Data Collector and centralized all logs. A forwarder may have been involved; the setup was done by a colleague who is no longer at the company. Centralizing helps our cross-functional teams search with ease in just one place. Do you know if a similar thing can be done with RUM? Can all the data collected by RUM be sent to Enterprise? I'm not very familiar with Splunk capabilities and any help will be appreciated! Thank you!
What is the best way to collect logs from Cloudflare? Since I'm not an AWS customer, I understand the app https://splunkbase.splunk.com/app/5114/ is not an option. Am I right? Thanks in advance for your help.
I have an alert that logs an event and sends an email. I am trying to add the timestamp of the event to the Log Event action, but it is not being added to the log event. The timestamp is correct in the alert's search table and is also being added to the email message correctly. However, it does not show up in the Log Event.
| eval event_timestamp=strftime(_time,"%Y-%m-%dT%H:%M:%S")
| table event_timestamp
Log Event - [Event input]:
...
event_timestamp=$result.event_timestamp$
...
Send Email action - [Message input]:
...
Event Timestamp: $result.event_timestamp$
Priority: XYZ
...
I have also noticed that if I put the timestamp before other fields in the Log Event action, then those fields are also missing in the log. Any ideas why Log Event isn't working when adding a timestamp to it?
Hi, I have a report that pulls daily transaction counts from a summary index. Running the report for "month to date", I don't get results for every day. My search is this:
index=summary search_name=Summarization_Daily_Txn App IN ("XXX") endpoint="ZZZZ"
| bin _time span=1d
| stats sum(Count) AS Txn_Count by _time
| addcoltotals
The output is missing the totals for Dec 2nd, 4th and 5th. Yes, I have verified that there are counts for those days. If I run the report so that it spans just Dec 4th and 5th, the counts show up, just not if I run it using earliest=@mon latest=@d. Any ideas on what I am doing wrong?
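A quick hedged check to see whether the summary events for the missing days exist at all and carry a Count field (names taken from the post):

index=summary search_name=Summarization_Daily_Txn earliest=@mon latest=@d
| timechart span=1d count sum(Count) as Txn_Count

If count is non-zero but Txn_Count is empty for those days, the events are there but the Count field is not being extracted for them; if count itself is zero, the scheduled summary search likely skipped or failed on those days.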
Hi, I have the following need: run a search that extracts all users who use an iPhone with so=9.*, and then, for those extracted users, find who has also used another device. One solution would be to run the first search, get the list of all users, and then do a new search with the UserIds as input.
First search: search model=iphone so=9.* | table UserId
Second search: search UserId IN (user list from the first search) model!=iphone
Would it be possible to do this extraction with just one search? Thanks
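A minimal sketch using a subsearch so it all runs as one search; the index and sourcetype are omitted as in the post, and the field names model, so, and UserId are taken from it:

model!=iphone
    [ search model=iphone so=9.* | stats count by UserId | fields UserId ]
| stats values(model) as other_models by UserId

The subsearch returns the matching UserId values, which the outer search turns into an implicit OR filter; default subsearch limits (around 10,000 results) apply, so at larger scale a single stats pass over both populations is an alternative.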
Hi, after upgrading to Splunk version 8.2.3 a few weeks ago, it suddenly became possible to remove/add a graph in a plot by clicking on the legend. Now this feature is not working. Has it been removed? I cannot find any documentation on this feature at all. I appreciate any help!