All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I have extracted a new field "proc_name" from source and added it to the table command of an existing query, and I am generating an email alert which is not showing the new field "proc_name" value in the email.

host=XXX index=YYY sourcetype=app_logs rc time_taken="*" | search RC>=8 | table client_ip, proc_name, proc_id, RC, Message

client_ip     proc_name   proc_id   RC   Message
MsgIDLCPS0.               5030      7    Process 'UPROC' #50930 -   RC=7MsgIDLCPS0.
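One thing worth checking, sketched below: if "proc_name" is a search-time extraction saved as a knowledge object, the scheduled alert may run in an app/user context where that extraction is not shared, so the field comes back empty. Extracting inline makes the search self-contained; the rex pattern here is hypothetical and should match however proc_name appears in your raw events.

host=XXX index=YYY sourcetype=app_logs rc time_taken="*"
| rex "Process '(?<proc_name>[^']+)'"
| search RC>=8
| table client_ip, proc_name, proc_id, RC, Message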
I am in the process of writing a Splunk script that is going to overwrite the contents of a lookup file using REST. However, the issue I am hitting is how to authenticate against the REST endpoint. I am planning on having Splunk run the script (probably through inputs.conf). It would run every x hours and update the lookup using a Python script that calls an outside source. I can successfully call the outside source and parse the data; however, I am stuck on how to overwrite the lookup table via REST. All examples of REST calls show passing credentials. I don't want to hardcode any admin creds in the script itself. I found this article from Splunk, but the REST section clearly shows they are passing creds. Are there any other ways to do this? https://www.splunk.com/en_us/blog/tips-and-tricks/store-encrypted-secrets-in-a-splunk-app.html Any suggestions?
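A minimal sketch of one credential-free approach, assuming you run the script as a scripted input with passAuth = splunk-system-user in inputs.conf: Splunk then writes a session key to the script's stdin, which can authorize a REST call that runs an | outputlookup search. The generating search and lookup name below are placeholders.

import ssl
import sys
import urllib.parse
import urllib.request

# With passAuth set on the scripted input, Splunk hands us a session key on stdin
session_key = sys.stdin.readline().strip()

# Placeholder search: replace with whatever builds your rows from the outside source
payload = urllib.parse.urlencode({
    "search": "| makeresults | eval example=1 | outputlookup my_lookup.csv",
    "exec_mode": "oneshot",
    "output_mode": "json",
}).encode()

request = urllib.request.Request(
    "https://localhost:8089/services/search/jobs",
    data=payload,
    headers={"Authorization": "Splunk " + session_key},
)

# The management port usually has a self-signed cert; relax verification for localhost
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen(request, context=context) as response:
    print(response.read().decode())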
In our outputs.conf for our Splunk forwarders we have two tcpout target groups ([tcpout:<target_group>]). Both tcpout groups have multiple servers and are autolb'd. Our second tcpout group (a remote Splunk instance) became unavailable due to a network issue, which caused all of our Splunk forwarders' local queues to fill up and block forwarding entirely (both groups), as they were no longer able to forward data to the second group. I'm looking into solutions using outputs.conf, namely the tcpout settings maxQueueSize and dropEventsOnQueueFull - a combination of these seems like it will solve our problem. However, the documentation here (https://docs.splunk.com/Documentation/Splunk/8.2.5/Admin/Outputsconf) says under dropEventsOnQueueFull: * CAUTION: DO NOT SET THIS TO A POSITIVE INTEGER IF YOU ARE MONITORING FILES. I am monitoring files - so this seems like a deal breaker? Can somebody help me understand why we wouldn't want to configure this setting if we're monitoring files? Or should we simply set this to 0 (not a positive integer)? If there's an outage of the second tcpout group, we're fine with losing some data for that site if that's the price of keeping the forwarders reporting to our first tcpout group. Hope that makes sense! Thanks in advance!
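For reference, a sketch of where those settings live, with hypothetical group and server names. Per the spec, dropEventsOnQueueFull is a wait time in seconds before dropping, and 0 or -1 means block rather than drop, so 0 would not avoid the blocking you saw; whether a positive value is acceptable alongside file monitoring is exactly the trade-off in question.

# outputs.conf sketch (group and server names are placeholders)
[tcpout:primary_site]
server = idx-a.example.com:9997, idx-b.example.com:9997

[tcpout:remote_site]
server = remote-a.example.com:9997, remote-b.example.com:9997
maxQueueSize = 7MB
# wait 30s for the queue to drain, then start dropping events for this group only
dropEventsOnQueueFull = 30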
Hey there, pretty new to Splunk searching. I am trying to create a table that combines search results based on SerialNumber and splits them into 3 columns but one row.

Currently:

`main_index` SerialNumber IN (XXX-XXX-XXX)
| search "DUKPT KEY #" OR "type=DUKPT"
| rex "DUKPT (Key|KEY) #(?<slot>[0-9]): \[ Status: (?<Status>[A-Z_]+)"
| rex "KSN:(?<Key>.+)\]"
| eval slot = if(LIKE(ApplicationVersion,"6.%"), slot, slot - 1)
| eval Key = if(LIKE(ApplicationVersion,"6.%"), ("Slot #".slot.": KSN: ".ksn), if(Status="KEY_PRESENT", "Slot #".slot.": KSN: ".Key, "Slot #".slot.": No Key Loaded"))
| dedup slot SerialNumber
| table SerialNumber Key
| sort slot

Result: (screenshot not included)

Desired result:

SerialNumber   Slot0           Slot1           Slot2
XXX-XXX-XXX    No Key Loaded   No Key Loaded   No Key Loaded

I've tried transpose and transaction (which merged the entries into one row, but I couldn't figure out how to split the entries into their own column/field).
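A sketch of the pivot step, assuming your slot and Key fields are populated as above: chart (stats plus xyseries would also work) turns one row per slot into one column per slot, so each SerialNumber ends up on a single row.

`main_index` SerialNumber IN (XXX-XXX-XXX)
... (rex, eval, and dedup steps as above) ...
| eval slot_col = "Slot".slot
| chart values(Key) over SerialNumber by slot_col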
Hi Guys, I am trying to do a search and at the same time drop certain information from showing up. As seen in the table below, there is this user [ghjkl-hh123-wer56] that shows up. What must I do in the search string so that usernames like the above no longer show up? Please advise.

username             hostname
user1                host1
user2                host2
ghjkl-hh123-wer56    host3
ghjkl-hh123-wer56    host4
user3                host4

Hope this clarifies. Thank you. Regards, Alex
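One hedged option, assuming the unwanted accounts all follow that letters-lettersdigits-lettersdigits pattern (adjust the regex to your real naming convention, or use NOT username="ghjkl-hh123-wer56" for a single literal match):

... your search ...
| regex username!="^[a-z]+-[a-z]+\d+-[a-z]+\d+$"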
The purpose of this topic is to create a home for legacy diagrams on how indexing works in Splunk, created by the legendary Splunk Support Engineer, Masa! Keep in mind the information and diagrams in this topic have not been updated since Splunk Enterprise 7.2. These used to live on an old Splunk community wiki resource page that has been, or soon will be, taken down, but many users have expressed that these have been and still are helpful. Happy learning!
I've been developing a dashboard that leverages a timeline viz but am having a considerably hard time adding CSS/HTML to remove the text-overflow of the labels. I see it's implementing the text.timeline.label class, but I'm far from an expert in JS or HTML for that matter... any insights that people have used to solve this?
I have been having issues trying to add the Splunk SDK to my .NET 5 project and am wondering if there are compatibility issues.
Hi All, Does the recently announced security vulnerability CVE-2021-3422 also apply to HWFs and IF that might be receiving and/or cooking data? Thanks
Hi, I want eventgen (installed as an app) to continuously replay 3 events, just replacing the timestamp:

# etc/apps/my_app/local/eventgen.conf
[host_perf_monitor.sample]
mode = replay
interval = 15
earliest = -15s
latest = now
outputMode = file
fileName = /opt/tmp/host_cpu.log
token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
token.0.replacementType = replaytimestamp
token.0.replacement = %Y-%m-%d %H:%M:%S

# etc/apps/my_app/samples/host_perf_monitor.sample
2022-03-24 01:01:10 host=linux_server_01 status=WARNING object=cpu_used_pct value=66
2022-03-24 01:01:12 host=linux_server_01 status=ERROR object=cpu_used_pct value=85
2022-03-24 01:01:14 host=linux_server_01 status=GOOD object=cpu_used_pct value=44

If eventgen ran at 08:38:45, I would expect the output to be:

2022-03-25 08:38:41 host=linux_server_01 status=WARNING object=cpu_used_pct value=66
2022-03-25 08:38:43 host=linux_server_01 status=ERROR object=cpu_used_pct value=85
2022-03-25 08:38:45 host=linux_server_01 status=GOOD object=cpu_used_pct value=44

but it is (first and second events are at the same time):

2022-03-25 08:38:43 host=linux_server_01 status=WARNING object=cpu_used_pct value=66
2022-03-25 08:38:43 host=linux_server_01 status=ERROR object=cpu_used_pct value=85
2022-03-25 08:38:45 host=linux_server_01 status=GOOD object=cpu_used_pct value=44

I tried other event "spreads" as well (for example on 01, 34, 45 seconds) and still always get the 1st and 2nd events with the same timestamp. Thanks
ITSI menus send users to the "suite_redirect" page, which also fails to load and shows "oops" for non-admin users. This usually happens after an ITSI upgrade (observed on 4.9 and later) on a search head cluster.
Since upgrading to ITSI 4.9, the app reverted to the free version "IT Essentials Work" (ITE Work) and most premium features are gone. This is because the ITSI license is now mandatory, or because the license master was not properly upgraded.
I have a requirement where I need to make an API call and write the data to a lookup file that I can use locally. The API call returns data in a CSV format.

Previously, I used the Add-on Builder to create a Python script that would make the API request and index this data. However, I have a new requirement to skip the index entirely and write to a local lookup on the search head. The Add-on Builder won't help, as it only shows examples of how to write the data to an index.

Thank you!
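A minimal sketch of one approach, assuming the API URL, app name, and file name are placeholders: since the script runs on the search head, it can write the CSV response straight into an app's lookups directory, where inputlookup/lookup will pick it up.

import os
import urllib.request

# Hypothetical endpoint that returns CSV
API_URL = "https://example.com/export.csv"

# Hypothetical app; any app's lookups directory on the search head works
lookup_path = os.path.join(
    os.environ.get("SPLUNK_HOME", "/opt/splunk"),
    "etc", "apps", "my_app", "lookups", "external_data.csv",
)

with urllib.request.urlopen(API_URL) as response:
    csv_bytes = response.read()

# Write to a temp file first, then rename, so a search never reads a half-written lookup
tmp_path = lookup_path + ".tmp"
with open(tmp_path, "wb") as f:
    f.write(csv_bytes)
os.replace(tmp_path, lookup_path)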
I have a search I can compose using multiple appends and sub-searches, but I assume there's an easier way I'm just not seeing, and I'm hoping someone can help (maybe using | chart?). Essentially, I have a set of user login data: username and login_event (successful, failed, account locked, etc.). I'd like to display a chart showing total events (by login_event) and distinct count by username, which might look like this:

login_event                            count
successful                             1600
failed                                 200
account locked                         10
successful (distinct usernames)        1200
failed (distinct usernames)            50
account locked (distinct usernames)    9
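A minimal sketch, assuming the fields are literally username and login_event (the index name is a placeholder); one stats pass produces both numbers per event type, with no appends or subsearches:

index=my_auth_index
| stats count AS total_events, dc(username) AS distinct_usernames BY login_event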
Hello Team, I have deployed an Istio-based application on Kubernetes, and I want to monitor it in Splunk APM. The application has sidecars injected and is accessible from a browser. I am using the Bookinfo demo application available on Istio: https://istio.io/latest/docs/setup/getting-started/ Can you please guide me on how to configure the OTel agent so it reports Istio app traces to Splunk APM? Is the Istio Mixer adapter required (it shows as deprecated in the documentation)?
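A sketch of one common wiring, assuming the Splunk OpenTelemetry Collector Helm chart and Istio's built-in Zipkin tracer (realm, token, cluster name, and namespace are placeholders); the deprecated Mixer adapter is not involved in this path:

# values.yaml for the splunk-otel-collector Helm chart (placeholders throughout)
clusterName: my-cluster
splunkObservability:
  realm: us0
  accessToken: "<APM access token>"

# Then point Istio's mesh-wide Zipkin tracer at the collector's Zipkin receiver:
# istioctl install --set meshConfig.defaultConfig.tracing.zipkin.address=splunk-otel-collector.default.svc.cluster.local:9411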
Hi, we are using the Website Monitoring app in Splunk Enterprise, and we want to know if there is any option available to schedule a maintenance window during changes to websites, to avoid alerts being generated at that time.
Looking to measure heavy sources and track how much is getting indexed per day by source. The main problem is our Splunk admin team cannot give us access to the _internal index, so I cannot run the standard _internal metrics commands such as:

index=_internal sourcetype=splunkd source=*metrics.log* group=per_source_thruput

Curious how accurate measuring actual log sizes with Splunk commands might be compared to _internal index stats. We don't need 100% accurate results, just a ballpark estimate, such as one source indexing 500-600 GB per day or 1-1.5 TB a day, for example. Thinking of trying something like:

index=aws-index sourcetype=someSource source="/some/source/file.log"
| eval raw_len=len(_raw)
| eval raw_len_kb = raw_len/1024
| eval raw_len_mb = raw_len/1024/1024
| eval raw_len_gb = raw_len/1024/1024/1024
| eval raw_len_tb = raw_len/1024/1024/1024/1024
| stats sum(raw_len_mb) as MB sum(raw_len_gb) as GB sum(raw_len_tb) as TB by source
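For what it's worth, a slightly leaner variant of the same idea: sum bytes once and convert at the end. len(_raw) ignores index-time overhead and any event trimming, so treat it strictly as the ballpark you're after.

index=aws-index sourcetype=someSource source="/some/source/file.log"
| eval bytes=len(_raw)
| stats sum(bytes) AS total_bytes BY source
| eval GB=round(total_bytes/1024/1024/1024, 2), TB=round(total_bytes/1024/1024/1024/1024, 3)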
Hello! I have a dataset that I'd like to add a new field to, where I can arbitrarily define the values with manual input, without downloading and re-uploading the data. I've tried editing the table, but it seems I can only enter a calculated value, some concatenation of fields and values, or the same value for every record. Any help is appreciated, thanks!

Example original dataset:

OG Field 1   OG Field 2   OG Field 3
UUID         timestamp    value
UUID         timestamp    value

New dataset:

OG Field 1   OG Field 2   OG Field 3   New Field
UUID         timestamp    value        I can input anything I want here, like a comment on the record
UUID         timestamp    value        I can input something different here

I don't necessarily need to use tables, so if there's another method of adding new fields to datasets from within Splunk, I'm open to that as well.
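One common pattern, sketched with a hypothetical lookup named comments.csv (columns: UUID, comment) that you maintain by hand, for example with the Lookup Editor app; the lookup join adds your free-text field at search time without touching the indexed data:

... your dataset search ...
| lookup comments.csv UUID OUTPUT comment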
I would really love to use the Campus Compliance Toolkit for NIST 800-171, but I have Splunk Cloud Enterprise. Splunkbase says version 1.0.2 works, but sadly Splunk support says it doesn't. Is there any chance others like this tool and have made it work? Or is there an alternative NIST reporting app out there (one that doesn't require an annual license fee)? Thank you for your feedback.
Hello, We have an app that passed the Cloud Vetting today, but I can't find it in Splunk Cloud in "Browse more apps" in order to install it. This is the app: https://splunkbase.splunk.com/app/6336/ Do you know why? Thanks, Omer