All Topics


Hi All, we have onboarded Windows DHCP servers to Splunk Cloud by installing a UF on each server. The DHCP server writes logs to a local log file and the universal forwarder sends the logs directly to Splunk Cloud. The problem is that the logs are being ingested into Splunk with a varying time difference. See the screenshot below: the first log was generated at 00:38 but indexed at 05:38, exactly 5 hours' difference, while the second log was generated at 19:58 but indexed at 00:59, an exact 7-hour difference between the event time (_time) and the time in the raw event; it was indexed at 00:59 and _time was picked up as 00:58. Please help me understand what the problem could be. Thanks, Bhaskar
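A constant, whole-hour gap between the time in the raw event and _time usually points at time-zone interpretation during timestamp parsing. For reference only, a minimal props.conf sketch; the sourcetype name and time zone are placeholders, and for UF-forwarded data this would be applied where parsing happens, i.e. on the Splunk Cloud side:

# props.conf - minimal sketch, names are assumptions
[<your_dhcp_sourcetype>]
TZ = <time zone the DHCP servers write their timestamps in, e.g. US/Eastern>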
Hello Guys, we have to integrate one of our SQL Servers with Splunk; the current version is SQL Server 2012. We are using the Splunk DB Connect app to configure it. Kindly confirm: if the SQL team upgrades to SQL Server 2017, is it compatible with Splunk DB Connect, or do they need to upgrade to SQL Server 2019? Please provide any solutions/documents on this.
[ VERY URGENT ] Hi all, does anyone have knowledge of how to push Symantec antivirus logs to Splunk, or pull logs from Symantec antivirus? A step-by-step process to do it would be appreciated.
Hi Team, I am trying to push AWS CloudWatch logs to Splunk using the log stream input in the Splunk Add-on for AWS, but I am not able to push them and received the message below. Failure in describing cloudwatch logs streams for log_group=/aws/lambda/**-cc-h*-**-routeupdater, error=Traceback. Can someone please suggest how to fix it?
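For context, the add-on call that is failing corresponds to the CloudWatch Logs DescribeLogStreams API. A quick way to check whether the configured credentials and region can make that call is the equivalent AWS CLI command; the log group and region below are placeholders:

aws logs describe-log-streams --log-group-name <your-log-group> --region <your-region>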
Hi SMEs, I have a quick query here. While searching DHCP logs I can see huge latency (_indextime - _time) for a few events; the rest all look OK. I'm sharing two consecutive event logs with the minimal and maximum latency reported. Any clue? Event collection is through a UF here.
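As a quick way to quantify it, a sketch of a latency search (index and sourcetype are placeholders) that shows whether the lag is concentrated on particular hosts:

index=<dhcp_index> sourcetype=<dhcp_sourcetype>
| eval latency_sec = _indextime - _time
| stats avg(latency_sec) as avg_latency max(latency_sec) as max_latency by host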
Hi, I'm doing a PoC on a trial version of Splunk. I'm trying to integrate Splunk with BetterCloud so that event log data can be pushed to Splunk, but I'm having the issue given below. Error message printed in BetterCloud: Splunk API token status: Any help will be appreciated. Thanks
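Assuming BetterCloud is sending to an HTTP Event Collector (HEC) token, a quick sanity check of the token and endpoint from any host with outbound access looks roughly like this; the host and token are placeholders:

curl -k https://<splunk-host>:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hec connectivity test"}'

A working token returns {"text":"Success","code":0}; anything else narrows down whether the problem is the token, the port, or network access.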
Hello. I know there have been a few posts on this topic, but I've been messing with it most of the day and the other posts weren't able to help me reach a solution. Hoping someone can provide some guidance.

I'm looking to pull some aggregate information out of Splunk via API requests, but I wanted to pre-build the data set using a scheduled report in Splunk so that the API request returns faster by just pulling the results of the last run instead of running the search itself before returning results.

In the UI I've created a report named test. I've tried a few different schedules and it ran twice earlier today, but at the moment I have it on the cron schedule 0 1 * * 4 (01:00 on Thursdays).

Via the API I can fetch the saved report named test like this:

https://SPLUNKURL:8089/services/scheduled/views/test

but no matter what schedule I set or modify in the UI, the results always show

cron_schedule 0 6 * * 1
is_scheduled 0

with the same results when requesting

https://SPLUNKURL:8089/servicesNS/APP/search/scheduled/views/_ScheduledView__test

and when I try

https://SPLUNKURL:8089/services/scheduled/views/test/history

I simply receive

<response>
  <messages>
    <msg type="ERROR">Cannot find saved search with name '_ScheduledView__test'.</msg>
  </messages>
</response>

even though I know it ran twice in the last day and I can see the results in the UI. Similarly, I tried updating the schedule via the API with

curl -u user:password --request POST 'https://SPLUNKURL:8089/services/scheduled/views/test/reschedule/' --data schedule_time=2022-03-03T04:00:01Z

and I get the same result

<response>
  <messages>
    <msg type="ERROR">Cannot find saved search with name '_ScheduledView__test'.</msg>
  </messages>
</response>

Am I missing something? I see the scheduled view and it's scheduled in the UI, but I can't figure out any way to see or access the schedule or history via the API. Hoping someone can shed some light on things, as it's not making sense to me at the moment. Also, if it's helpful, I checked and I believe our Splunk server version is 6.6.7.
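In case it helps frame the question: a report created in the UI normally lives under the saved/searches endpoint rather than scheduled/views (which is typically for scheduled dashboard/PDF delivery), so the equivalent calls would look something like the sketch below. The app namespace and report name are taken from the post and may need adjusting:

# scheduling metadata for the report itself
curl -k -u user:password https://SPLUNKURL:8089/servicesNS/-/search/saved/searches/test

# dispatch history of the report (past scheduled runs)
curl -k -u user:password https://SPLUNKURL:8089/servicesNS/-/search/saved/searches/test/history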
I have two separate searches that provide me the same data in two different fields. I want to identify the common items across these two.

Search 1:

`sample_source` earliest=-7d env="test" msg="storage" type="running_services" data="*myservice*"
| dedup info.unitId
| table info.unitId

and search 2:

`sample_source_2` value="etc" idea="random" earliest=-14d name="*myservice*"
| dedup columns.serviceID
| table columns.serviceID

I want to see the common items across these two tables. I looked at similar questions posted here, but they all start with index= and sourcetype=; I do not know which of the above maps to which to get the index, as I am new to Splunk. Appreciate any help. Thanks!
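For what it's worth, a common pattern is to append the two searches, normalize the two ID fields into one, and keep only values seen by both. A rough sketch reusing the macros from the post (field handling is assumed):

`sample_source` earliest=-7d env="test" msg="storage" type="running_services" data="*myservice*"
| dedup info.unitId
| eval id='info.unitId', src="search1"
| append
    [ search `sample_source_2` value="etc" idea="random" earliest=-14d name="*myservice*"
      | dedup columns.serviceID
      | eval id='columns.serviceID', src="search2" ]
| stats dc(src) as sources by id
| where sources=2
| table id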
My current Splunk environment is on 7.2.x. As part of the Splunk 8.x upgrade, I am first trying to upgrade the apps below to versions compatible with both 7.2.x and 8.x:

Splunk Supporting Add-on for Active Directory - to version 3.0.1
Splunk App for Unix - to version 6.0.0

Though both are listed as compatible with 7.2.x through 8.2.x, I am getting the warnings below on my Deployer. Can someone confirm whether they have recently upgraded in the same scenario and faced no issues, so that I can ignore the warnings and push the apps to the search heads?

Invalid key in stanza [ldapsearch] in /opt/splunk/etc/apps/SA-ldapsearch/default/commands.conf, line 2: python.version (value: python3).
Invalid key in stanza [ldapfetch] in /opt/splunk/etc/apps/SA-ldapsearch/default/commands.conf, line 11: python.version (value: python3).
Invalid key in stanza [ldapfilter] in /opt/splunk/etc/apps/SA-ldapsearch/default/commands.conf, line 21: python.version (value: python3).
Invalid key in stanza [ldapgroup] in /opt/splunk/etc/apps/SA-ldapsearch/default/commands.conf, line 31: python.version (value: python3).
Invalid key in stanza [ldaptestconnection] in /opt/splunk/etc/apps/SA-ldapsearch/default/commands.conf, line 41: python.version (value: python3).
Invalid key in stanza [script://./bin/update_hosts.py] in /opt/splunk/etc/apps/splunk_app_for_nix/default/inputs.conf, line 2: python.version (value: python3).
Invalid key in stanza [admin_external:unix_conf] in /opt/splunk/etc/apps/splunk_app_for_nix/default/restmap.conf, line 6: python.version (value: python3).
Invalid key in stanza [admin_external:alert_overlay] in /opt/splunk/etc/apps/splunk_app_for_nix/default/restmap.conf, line 12: python.version (value: python3).
Invalid key in stanza [admin_external:sc_headlines] in /opt/splunk/etc/apps/splunk_app_for_nix/default/restmap.conf, line 22: python.version (value: python3).
Invalid key in stanza [admin_external:unix_configured] in /opt/splunk/etc/apps/splunk_app_for_nix/default/restmap.conf, line 32: python.version (value: python3).
I have configured a heavy forwarder to collect syslog data and forward it to our Splunk indexers. We purposely don't wish to use a syslog server for log collection, for other reasons. Now we also have a requirement to forward the syslog data to Azure Log Analytics. Unfortunately, with Log Analytics we must use the Log Analytics agent (which is very similar to a Splunk UF) to collect logs locally on the HF and forward them to Log Analytics; I haven't found a way to forward logs from the HF to Log Analytics directly. Hence, I'm just wondering if someone can advise whether it's possible to configure the HF to write logs to local files, exactly as a syslog daemon (like rsyslog) does?
New to Splunk and I've been struggling to manipulate search results into the final result I am looking for. In PowerShell, where I'm familiar, I would just use a series of variables and return a final result set. I am trying to accomplish the below (each target_name has multiple disk_group values):

1) I need to find the latest Usable_Free_GB for each disk_group in each target_name and sum them.
2) I need to find the latest Usable_Total_GB for each disk_group in each target_name and sum them.

I can get #1 and #2 in different searches, but am struggling to get them together to return a result set like this:

Target_Name    UsableSpaceFree    TotalUsableSpace
Target_Name1   123                456
Target_Name2   234                567

This is the closest I can get, but I need only 2 rows returned with all three fields populated.

Once I can get the result set grouped by Target_Name, I then need to use eval to create a new field like the below, using the values from #1 and #2:

eval percent_free=round((UsableSpaceFree/TotalUsableSpace)*100,2)

Target_Name    UsableSpaceFree    TotalUsableSpace    percent_free
Target_Name1   123                456                 ?
Target_Name2   234                567                 ?
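A rough sketch of the usual two-stage stats approach, assuming the field names from the post and a placeholder base search: the first stats picks the latest value per disk_group within each target, the second sums those per target.

<your base search>
| stats latest(Usable_Free_GB) as free latest(Usable_Total_GB) as total by Target_Name disk_group
| stats sum(free) as UsableSpaceFree sum(total) as TotalUsableSpace by Target_Name
| eval percent_free=round((UsableSpaceFree/TotalUsableSpace)*100,2)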
I have the following data, and I want a graph with age as the x axis and height as the y axis. name and value are fields pulled out of a "rex field=_raw" command.

name:  height, age, height, age, height, age, height, age
value: 100, 1, 2, 105, 3, 107, 4, 108
The Splunk Trust and members of the community will be hosting open office hours for anybody who wants to chat about anything Splunk related. Please visit the office_hours channel in Slack, or drop comments here if there is any topic you'd like to see discussed!
Hi Splunkers, I have to create an alert for root user logins in AWS. For this, I am ingesting CloudTrail logs into a distributed Splunk environment. I want to add organization-wide AWS accounts to get logs, but adding every single account and its credentials in the Splunk Add-on for AWS is difficult. Kindly suggest a way to onboard CloudTrail logs from multiple accounts. Thanks
Hi Splunkers, I would like to know what happens to logging in the scenarios below when there is an outage. Does Splunk recover the logs once the systems are back from the outage, or does it lose them?

1. Logs forwarded from an app
2. Logs synced from an S3 bucket
3. Logs pulled via API
4. Data coming through a heavy forwarder

Thanks
Hi, I need to calculate the EPS (events per second) averaged over a month. Any ideas?
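One rough sketch using tstats, assuming all indexes over the last 30 days; adjust the index filter and window to your case:

| tstats count where index=* earliest=-30d@d latest=@d
| eval avg_eps = round(count / (30 * 86400), 2)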
Is there a way we can authenticate to a DUO-MFA-enabled Splunk instance using the Python API/SDK? Appreciate your help.
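For context, one approach sometimes used here is a Splunk authentication token (created under Settings > Tokens in Splunk Web, on versions that support it) instead of username/password, since a bearer token may avoid the interactive login flow; whether that sidesteps DUO depends on how authentication is configured in your environment. A quick REST sketch against the management port, with host and token as placeholders:

curl -k -H "Authorization: Bearer <auth-token>" \
  https://<splunk-host>:8089/services/search/jobs \
  -d search="search index=_internal | head 5"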
Hi All, we just upgraded our HWF to version 8.2.5 and now when we start Splunk we get this message: "ERROR: Detected httpout stanza in outputs.conf , forwarding data over HTTP is only supported on Universal Forwarders. For more information, see " This HWF is outputting all data to the HEC on another Splunk instance and is working fine. What is the meaning of the message? Is this function going to be deprecated? Note the page it refers to says things like "Supported on Splunk universal forwarders only." It would be a real concern if this is to be deprecated - does anyone have any idea? Thanks, Keith
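For reference, the commonly documented output path for a heavy forwarder is the classic tcpout (S2S) forwarding. A minimal outputs.conf sketch in case a fallback is ever needed; the group name, host and port are placeholders:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = <receiving-instance>:9997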
I have a script that sends, effectively, yum outputs to receivers/simple. props.conf says:

[yumstuff]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Miscellaneous
pulldown_type = 1

I expect each post to be one event, but some posts get broken into multiple events for unknown reasons. My guess is that those posts are longer, although I couldn't find any applicable limit in limits.conf. The broken ones are not all that long to start with. I examined one that was broken into three "events"; combined, they have 18543 chars over 271 lines. The closest attribute in limits.conf I can find is maxchars, but that's for [kv] only, and the limit is already high:

[kv]
indexed_kv_limit = 1000
maxchars = 40960

The way it is broken also confuses me. My post begins with a timestamp, followed by some bookkeeping kv pairs, then yum output. If this breakage were caused by limits, I would expect the event containing the first part to be the biggest, to the extent it exceeds the limit. But in general, the "event" corresponding to the end of the post is the biggest; even stranger, the middle "event" generally is extremely small, containing only one line. In the post I examined, for example, the first "event" contained 6710 chars, the second 71 chars, and the last 11762 chars. The breaking points are not special, either. For example:

2022-02-09T19:51:28+00:00 ...
...
---> Package iwl6000g2b-firmware.noarch 0:18.168.6.1-79.el7 will be updated
---> Package iwl6000g2b-firmware.noarch 0:18.168.6.1-80.el7_9 will be an update
<break>
---> Package iwl6050-firmware.noarch 0:41.28.5.1-79.el7 will be updated
<break>
---> Package iwl6050-firmware.noarch 0:41.28.5.1-80.el7_9 will be an update
---> Package iwl7260-firmware.noarch 0:25.30.13.0-79.el7 will be updated
...

Where should I look?
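For comparison, a sketch of a props.conf that keeps each post as a single event by breaking only on the leading ISO timestamp and disabling line merging; the TRUNCATE value is an assumption sized to cover the character counts mentioned above:

[yumstuff]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}
TRUNCATE = 100000
MAX_TIMESTAMP_LOOKAHEAD = 30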
Running Splunk 8.2. I have discovered that after completing a dashboard in Dashboard Studio and bundling it up to move from one Splunk instance to another, the background image (PNG/JPEG) that was attached to the original dashboard is missing. In the source code it references a KV store hash. The KV store has been "upgraded/updated" from the original environment the dashboard was designed on to the current Splunk environment where the background seems to be missing. A possible solution would be to save the PNG within the app; however, I can't upload or reference that image in the app via the GUI given the host-separated Splunk environment (search head, master, etc. are all different virtual hosts and are not hosts I can simply click "browse" for within Dashboard Studio, as that references the local box, which isn't part of the Splunk environment). And there doesn't seem to be good logic on how to change the source code to point at the PNG within the app. I've seen other Splunk questions and the response right now is that it's a possible "bug." Is anyone trying to do something similar who has an effective workaround?