All Topics

I have a field (eventCode) containing code values, some of which end with certain letters. I want to extract only the eventCode values that end with E, F, or V and display them separately under different fields/names (minor, major, medium). I tried | where eventCode=*E, but this does not work. Is there any way to extract them other than rex/regex? If not, can you please provide some input?

Example:

eventCode=xyxbxsndsndg-5-3000-E
eventCode=aksjdjfdfvbrhgnvfmbfbc-54-3601-E
eventCode=plgkdfdcmasjenfmdklv-61-2501-F
eventCode=pojdksdjhmmmaskxjs-91-4501-V

Result:

Minor                              Major
xyxbxsndsndg-5-3000-E              plgkdfdcmasjenfmdklv-61-2501-F
aksjdjfdfvbrhgnvfmbfbc-54-3601-E
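One regex-free option is eval with like(), which supports % wildcards. A minimal sketch, assuming the field and bucket names from the post:

... your base search ...
| eval minor=if(like(eventCode, "%-E"), eventCode, null())
| eval major=if(like(eventCode, "%-F"), eventCode, null())
| eval medium=if(like(eventCode, "%-V"), eventCode, null())
| stats values(minor) AS Minor values(major) AS Major values(medium) AS Medium

match() would also work here but uses regex; like() keeps it to simple wildcard patterns.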
I have 2 Splunk SPL searches.
=====================
index=computer_admin source=admin_priv sourcetype=prive:db account_name=admin earliest=-1d
| fields comp_name, comp_role, account_name, local_gp, gp_name
| table comp_name, comp_role, account_name, local_gp, gp_name
=====================
The comp_name field has values such as AAAAA, BBBBB, CCCCC, AFSGSH, GFDFDF, IUYTE, HGFDJ, ZZZZZ, YYYYYY, IIIIII, EEEEEE. Basically, I am looking for all the comp_names that the admin is on and copying the list into another SPL search to get the comp owners.
Second SPL:
===================
index=computer_admin source=emp_card_details sourcetype="something:db" C_NAME IN (AAAAA, BBBBB, CCCCC, AFSGSH, GFDFDF, IUYTE, HGFDJ, ZZZZZ, YYYYYY, IIIIII, EEEEEE)
| eval arl=lower(C_NAME)
| stats values(asset_owner) by arl
===================
Can we use a subsearch or anything similar to get this done in one SPL search? Any assistance?
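A subsearch can feed the comp_name list into the second search automatically; renaming the field to C_NAME makes the generated filter match the outer search. An untested sketch, using the field and source names from the post:

index=computer_admin source=emp_card_details sourcetype="something:db"
    [ search index=computer_admin source=admin_priv sourcetype=prive:db account_name=admin earliest=-1d
      | dedup comp_name
      | fields comp_name
      | rename comp_name AS C_NAME ]
| eval arl=lower(C_NAME)
| stats values(asset_owner) by arl

The subsearch returns (C_NAME="AAAAA") OR (C_NAME="BBBBB") OR ..., replacing the hand-copied IN list.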
Hello, I am trying to count the maximum number of distinct devices used each day, per group, over a week, and compare it with the maximum available resources. For each day I count a different number of used devices per group. For the week I want to determine the max value for each group and compare that value with a predefined maximum available value. With a query like this (over the last 7 days):

<search> | timechart span=1d dc(devicename) by groupname

I get a table like this:

_time        Group1    Group2    Group3 ...
7.1.2022     4         8         1
8.1.2022     2         3         0
9.1.2022     6         2         0
...

How can I calculate the max value of each column (group) and compare it with a predefined value for that group? With timechart I didn't succeed; does timechart not pass the values through to the next command?
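timechart output can be passed on, but it is often easier to flip it back to rows with untable first and then aggregate. A sketch, with made-up per-group limits:

<search>
| timechart span=1d dc(devicename) by groupname
| untable _time groupname used
| stats max(used) AS max_used BY groupname
| eval max_available=case(groupname=="Group1", 10, groupname=="Group2", 12, true(), 5)
| eval over_limit=if(max_used > max_available, "yes", "no")

If the limits live in a lookup rather than a hard-coded case(), replace the first eval with a lookup command.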
I am trying to index a small CSV file with only one column (both with monitoring and manually). Is it impossible? I was able to index it only after I added an additional column. For monitoring, I have defined the inputs.conf below:

[monitor:///opt/mailboxes_not_created_empid/*.csv]
disabled = 0
sourcetype = csv_current_time
index = mailboxes_not_created_empid
crcSalt = <SOURCE>
initCrcLength = 512

The (comma-separated) CSV file is:

Employee_Number
141941
180536
189377
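It may be worth checking the sourcetype's parsing settings; structured parsing for UF-monitored files happens on the forwarder, so a props.conf there can tell Splunk the first line is a header even with a single column. A sketch, assuming the sourcetype name from the post:

[csv_current_time]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
SHOULD_LINEMERGE = false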
1. Which firewall port is used for Splunk integration with EPM SaaS?
2. Any idea about the volume of events received, in megabytes per day?
Hi all, this is a different question than usual: I received an email from Splunk Accreditations Team <admin@mindtickle.com> with the following subject: "Accreditation assessment module has moved!". I didn't request any accreditation assessment and I don't know this email address. Does anyone know this email? Ciao and thanks. Giuseppe
Hello everyone, I have a correlation search set up to detect Suricata IDS alerts of a specific severity and trigger a notable as a response action in ES. I would like to know if there is a way to optimize my search by transforming it into a tstats search, to improve speed and performance. My current search:

index=suricata sourcetype=suricata event_type=alert alert.severity=1

I have the "Intrusion Detection" data model populated with Suricata logs (and accelerated). I would like to know if I can take advantage of the acceleration and use a tstats command in my correlation search in order to save some resources. Thank you in advance. Regards, Chris
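An accelerated data model can indeed be queried with tstats. A sketch, assuming the CIM Intrusion Detection model (root dataset IDS_Attacks); how alert.severity=1 maps onto the CIM severity field depends on the Suricata add-on, so the where clause here is an assumption:

| tstats summariesonly=true count
    from datamodel=Intrusion_Detection.IDS_Attacks
    where IDS_Attacks.severity="critical" sourcetype=suricata
    by IDS_Attacks.signature IDS_Attacks.src IDS_Attacks.dest

summariesonly=true restricts the search to the acceleration summaries, which is where the speed gain comes from.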
Hello All, I am working on installing and getting data in for SC4S (Splunk Connect for Syslog). For installation I referred to the link below, and the service is running without any error.

https://splunk.github.io/splunk-connect-for-syslog/main/gettingstarted/quickstart_guide/

But the problem I am facing is testing the data in. As mentioned in the document, I am trying to send data to the UDP port, but somehow it gives the error below.

Command I am running:
echo "Hello SC4S" > /dev/udp/<SC4S_ip>/514

Error I am getting:
bash: /dev/udp/: Is a directory

Note: currently the test data is sent from the same machine SC4S is installed on; as I am doing a PoC on SC4S, I am using only one machine for both. Can anyone please help me with this? I am not finding any related question on this.
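The "Is a directory" error usually means the <SC4S_ip> placeholder was left in literally (the shell then treats the angle brackets as redirections and the target becomes /dev/udp/). The /dev/udp pseudo-device also only works in bash, not plain sh. A sketch with a concrete address, assuming SC4S listens on the same host:

echo "Hello SC4S" > /dev/udp/127.0.0.1/514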
Hello there, team! I'm hoping someone can assist me with this requirement or confirm whether a solution exists. I need to filter specific log types and build "real-time" dashboards from them. Is there a service that can assist me with this? The dashboard should be viewable in real time and should be self-contained once set up. I'm hoping the team's expertise will come through and present me with a solution as soon as feasible.
Hi Splunk team, I have a question about searching in the Splunk console. I got the error below:

Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.

And I am using an enterprise license. Does anyone have an idea about this case? I'd appreciate it.
Hi All, we have onboarded Windows DHCP servers to Splunk Cloud by installing UFs on each server. The DHCP server writes logs to a local log file and the universal forwarder sends them directly to Splunk Cloud. The problem is that logs are being ingested into Splunk with a varying time difference. See the screenshot below: the first log was generated at 00:38 and indexed at 05:38, exactly 5 hours' difference, whereas the second log was generated at 19:58 but indexed at 00:59, with _time picked as 00:58, which shows roughly 7 hours of difference between the event time (_time) and the time in the raw event.

Please help me understand what the problem could be.

Thanks, Bhaskar
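Offsets that come out to a whole number of hours usually point to a timezone mismatch between the raw timestamps and what Splunk assumes while parsing. A props.conf sketch for wherever parsing happens for this sourcetype (the stanza name is hypothetical; set TZ to whatever timezone the DHCP servers actually log in):

[your_dhcp_sourcetype]
TZ = UTC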
Hello Guys, we have to integrate one of our SQL Servers with Splunk; the current version is SQL Server 2012. We are using the Splunk DB Connect app to configure it. Kindly confirm: if the SQL team upgrades to SQL Server 2017, is it compatible with Splunk DB Connect, or do they need to upgrade to SQL Server 2019? Please provide any solutions/documents on this.
[ VERY URGENT ]

Hi all, does anyone have knowledge about how to push Symantec antivirus logs to Splunk, or how to pull logs from Symantec antivirus? A step-by-step process for doing it would be appreciated.
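One common pattern, offered here only as a hedged sketch: configure the antivirus product to export its logs to text files (or to syslog), then monitor them with a forwarder. The path, sourcetype, and index in this inputs.conf are hypothetical placeholders, not Symantec's actual defaults:

[monitor:///var/log/symantec/*.log]
disabled = 0
sourcetype = symantec:av
index = antivirus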
Hi Team, I am trying to push AWS CloudWatch logs to Splunk using the log stream input in the Splunk Add-on for AWS, but I was not able to push them and received the message below.

Failure in describing cloudwatch logs streams for log_group=/aws/lambda/**-cc-h*-**-routeupdater, error=Traceback

Can someone please suggest how to fix it?
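Since the full traceback is cut off, this is only a guess, but a failure while describing log streams is often an IAM permissions problem on the AWS account the add-on uses. A policy fragment granting the relevant read actions (the exact set the add-on needs may vary by version):

{
  "Effect": "Allow",
  "Action": [
    "logs:DescribeLogGroups",
    "logs:DescribeLogStreams",
    "logs:GetLogEvents"
  ],
  "Resource": "*"
}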
Hi SMEs, I have a quick query here. While searching DHCP logs I can see huge latency (_indextime - _time) for a few events; the rest all look OK. Sharing two consecutive event logs, one with the minimal and one with the maximum latency reported. Any clue? Event collection is through a UF here.
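A sketch for quantifying the latency distribution and seeing whether the lag is isolated to specific hosts or forwarders (index and sourcetype are placeholders for your DHCP data):

index=your_dhcp_index sourcetype=your_dhcp_sourcetype
| eval latency=_indextime - _time
| stats min(latency) avg(latency) max(latency) count BY host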
Hi, I'm doing a PoC on a trial version of Splunk. I'm trying to integrate Splunk with BetterCloud so that event log data can be pushed to Splunk, but I'm having the issue given below.

Error message printed on BetterCloud:

Splunk API token status:

Any help will be appreciated. Thanks
Hello. I know there have been a few posts on this topic, but I've been messing with it most of the day and the other posts weren't able to help me reach a solution. Hoping someone can provide some guidance.

I'm looking to pull some aggregate information out of Splunk via API requests, but wanted to pre-build the data set using a scheduled report in Splunk so that the API request returns faster, just pulling the results of the last run rather than running the search itself before returning results.

In the UI I've created a report named test. I've tried a few different schedules and it ran twice earlier today, but at the moment I have it on the cron schedule 0 1 * * 4 (01:00 on Thursdays).

Via the API I can fetch the saved report named test like this:

https://SPLUNKURL:8089/services/scheduled/views/test

but no matter what schedule I set or modify in the UI, the results always show

cron_schedule 0 6 * * 1
is_scheduled 0

with the same results when requesting

https://SPLUNKURL:8089/servicesNS/APP/search/scheduled/views/_ScheduledView__test

and when I try

https://SPLUNKURL:8089/services/scheduled/views/test/history

I simply receive

<response>
<messages>
<msg type="ERROR">Cannot find saved search with name '_ScheduledView__test'.</msg>
</messages>
</response>

even though I know it ran twice in the last day and I can see the results in the UI. Similarly, I tried updating the schedule via the API with

curl -u user:password --request POST 'https://SPLUNKURL:8089/services/scheduled/views/test/reschedule/' --data schedule_time=2022-03-03T04:00:01Z

and I get the same result:

<response>
<messages>
<msg type="ERROR">Cannot find saved search with name '_ScheduledView__test'.</msg>
</messages>
</response>

Am I missing something? I see the scheduled view and it's scheduled in the UI, but I can't figure out any way to see or access the schedule or history via the API. Hoping someone can shed some light on things, as it's not making sense to me at the moment. Also, if it's helpful, I checked and I believe our Splunk server version is 6.6.7.
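One thing worth checking: in the REST API, a report created in the UI lives under saved/searches, while scheduled/views entries are the separate objects Splunk creates when a dashboard is scheduled for PDF delivery, which would explain the _ScheduledView__ name mismatch. A hedged sketch of the saved-search endpoints, reusing the host and credentials from the post:

curl -k -u user:password https://SPLUNKURL:8089/services/saved/searches/test
curl -k -u user:password https://SPLUNKURL:8089/services/saved/searches/test/history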
I have two separate searches that provide me the same data in two different fields. I want to identify the common items across these two.

Search 1:

`sample_source` earliest=-7d env="test" msg="storage" type="running_services" data="*myservice*"
| dedup info.unitId
| table info.unitId

and search 2:

`sample_source_2` value="etc" idea="random" earliest=-14d name="*myservice*"
| dedup columns.serviceID
| table columns.serviceID

I want to see the common items across these two tables. I looked at similar questions posted here, but they all start with index= and sourcetype=, and I don't know how the searches above (which use macros) map to those. I am new to Splunk. Appreciate any help. Thanks!
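A sketch that finds the intersection by normalizing both fields to one name and counting (macro and field names taken from the post, untested):

`sample_source` earliest=-7d env="test" msg="storage" type="running_services" data="*myservice*"
| dedup info.unitId
| rename info.unitId AS id
| table id
| append
    [ search `sample_source_2` value="etc" idea="random" earliest=-14d name="*myservice*"
      | dedup columns.serviceID
      | rename columns.serviceID AS id
      | table id ]
| stats count BY id
| where count > 1

Any id with count > 1 appears in both result sets.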
My current Splunk environment is on 7.2.x. As part of the Splunk 8.x upgrade, I am first trying to upgrade the apps below to dual-compatible versions (for both 7.2.x and 8.x):

Splunk Supporting Add-on for Active Directory - to version 3.0.1
Splunk App for Unix - to version 6.0.0

Though both are documented as compatible with 7.2.x through 8.2.x, I am getting the warnings below on my deployer. Can someone confirm whether they have recently upgraded in the same scenario as me and faced no issues, so that I can ignore the warnings and push the apps to the search heads?

Invalid key in stanza [ldapsearch] in /opt/splunk/etc/apps/SA-ldapsearch/default/commands.conf, line 2: python.version (value: python3).
Invalid key in stanza [ldapfetch] in /opt/splunk/etc/apps/SA-ldapsearch/default/commands.conf, line 11: python.version (value: python3).
Invalid key in stanza [ldapfilter] in /opt/splunk/etc/apps/SA-ldapsearch/default/commands.conf, line 21: python.version (value: python3).
Invalid key in stanza [ldapgroup] in /opt/splunk/etc/apps/SA-ldapsearch/default/commands.conf, line 31: python.version (value: python3).
Invalid key in stanza [ldaptestconnection] in /opt/splunk/etc/apps/SA-ldapsearch/default/commands.conf, line 41: python.version (value: python3).
Invalid key in stanza [script://./bin/update_hosts.py] in /opt/splunk/etc/apps/splunk_app_for_nix/default/inputs.conf, line 2: python.version (value: python3).
Invalid key in stanza [admin_external:unix_conf] in /opt/splunk/etc/apps/splunk_app_for_nix/default/restmap.conf, line 6: python.version (value: python3).
Invalid key in stanza [admin_external:alert_overlay] in /opt/splunk/etc/apps/splunk_app_for_nix/default/restmap.conf, line 12: python.version (value: python3).
Invalid key in stanza [admin_external:sc_headlines] in /opt/splunk/etc/apps/splunk_app_for_nix/default/restmap.conf, line 22: python.version (value: python3).
Invalid key in stanza [admin_external:unix_configured] in /opt/splunk/etc/apps/splunk_app_for_nix/default/restmap.conf, line 32: python.version (value: python3).
I have configured a heavy forwarder to collect and forward syslog data to our Splunk indexers. We purposely don't wish to use a syslog server for the log collection, for other reasons. Now we also have a requirement to forward the syslog data to Azure Log Analytics. Unfortunately, with Log Analytics we must use the Log Analytics agent (which is very similar to the Splunk UF) to collect logs locally on the HF and forward them to Log Analytics; I haven't found a way to forward logs from the HF to Log Analytics directly. Hence, I'm just wondering if someone can advise whether it's possible to configure the HF to write logs locally, exactly the way a syslog daemon (like rsyslog) does?
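Splunk forwarders don't natively write raw events to local files, but one possible workaround is to have the HF send a copy out via its syslog output to a local rsyslog instance, which then writes the files for the Log Analytics agent to pick up. A hedged outputs.conf sketch (group name and port are made up):

[syslog:local_rsyslog]
server = 127.0.0.1:5514
type = udp

Routing specific data to this group additionally requires a transforms.conf rule setting _SYSLOG_ROUTING, omitted here.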