All Topics


Hello, I am trying to find a native solution to monitor the execution of a Phantom playbook. If one of the actions fails, or a specific message/data is returned by a custom function, does anyone know of a way to make a general/native configuration so that an admin receives an instant email with the error, the playbook that ran, etc.? I am aware of the API 'error' and 'discontinue' methods, but using those would mean adding this kind of check at each step of the playbook... Greatly appreciate your ideas!
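A minimal sketch of a reusable failure handler, assuming the classic Python playbook API; the "smtp" asset name, the "send email" action, and the recipient address are placeholders for your environment. It is still wired per action via the callback argument, just with far less code than explicit error/discontinue checks at every step:

    import phantom.rules as phantom

    # Pass callback=notify_on_failure to each phantom.act() call; a failed
    # action then emails an admin with the container and action details.
    def notify_on_failure(action=None, success=None, container=None, results=None,
                          handle=None, filtered_artifacts=None, filtered_results=None):
        if not success:
            phantom.act("send email", parameters=[{
                "to": "soc-admin@example.com",  # placeholder recipient
                "subject": "Playbook action failed: {}".format(action["name"]),
                "body": "Container {}: {}".format(container["id"], results),
            }], assets=["smtp"])  # assumes a configured SMTP asset named "smtp"
        return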
I am upgrading a 6.6.X Splunk Enterprise and, following the upgrade manual, I have to upgrade to version 7.2.x first, but it isn't listed on the older-version download page, and I can't find anything about upgrading without downloading it first. I need the .rpm, .deb, and .tgz packages. Where can I download Splunk Enterprise 7.x? I was able to download Splunk Enterprise 8.x and 9.x. Thanks for any info! Regards, Sobo
I am unable to install apps in Splunk, even though my username and password are correct. I tried both my splunk.com credentials and my Splunk Enterprise credentials. I tried clearing cookies, logging out, and changing my password before logging back in. May I please know if there is any other way to solve this issue?
Hi Community, I have a really strange issue and I'm wondering whether it affects everyone who installs Splunkbase apps. It happens even on a clean, freshly started Splunk:

    docker run -ti --rm -e SPLUNK_START_ARGS="--accept-license" -e SPLUNK_PASSWORD="<redacted>" -p 8000:8000 splunk/splunk:latest

When I log in and try to install an app from Splunkbase (Manage Apps -> Browse more apps -> select ANY app -> log in to Splunkbase, Agree and Install), I get the error message:

    Invalid app contents: archive contains more than one immediate subdirectory: and timeline_app

As I said, this happens with any app. Downloading the file from Splunkbase and installing it either in the UI or on the command line yields the same error. Do you have any idea? Can you reproduce this behaviour? I don't know what could be special about my configuration.
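Not a fix, but a quick way to see exactly which immediate subdirectories Splunk is objecting to in the downloaded archive; the filename is a placeholder. The empty name before "and timeline_app" in the error suggests a stray top-level entry in the tarball:

    tar -tzf splunk-app-download.tgz | cut -d/ -f1 | sort -u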
Hi everyone, our team is currently looking for a way to make several dashboards accessible to business users who may not have a Splunk account, specifically by giving them a SharePoint link that contains the links to each report. When we checked with our company's Splunk support team, they said it would not be possible and that we would have to add those users manually via our ticketing system, which would cause delays we want to avoid. I just wanted to check whether there is a way to do this within the dashboard to work around a restriction they may not be familiar with. Let me know if there is anything you need me to clarify. Please and thank you!
Hello team, I am having problems configuring Splunk with Keycloak via SAML; every time it shows me an invalid request. Could you point me to how I can set up this configuration? Best regards.
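For comparison, a minimal authentication.conf sketch assuming Keycloak as the IdP; the entityId, realm URL, and certificate path are placeholders, and the Keycloak SAML client must mirror these SP settings. "Invalid request" errors often come down to a mismatch in entityId/client ID, signing options, or the redirect URL:

    [authentication]
    authSettings = saml
    authType = SAML

    [saml]
    entityId = splunkACS
    idpSSOUrl = https://keycloak.example.com/realms/myrealm/protocol/saml
    idpCertPath = /opt/splunk/etc/auth/idpCerts/keycloak.pem
    signAuthnRequest = true
    signedAssertion = true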
Trying to find out if I can go directly from Splunk Enterprise 8.2.6 to Splunk Enterprise 9.0.
Hello, in our environment we are dealing with hundreds of GB/day of logs coming from firewalls. Despite having already fixed some noisy sources, we are struggling to reduce the load. I was wondering if any of you have already tackled this problem. Our configuration is:

    FW --> Load Balancer --> Syslog servers --> file --> Splunk HFs --> Splunk Indexer

The Splunk HFs are installed on the same servers where the syslog service is running. Syslog receives the data from the firewalls and writes it to a file, then the Splunk HF monitors those files. The idea is to add a component that every "n" minutes consolidates/summarizes the information written to the file by the syslog server and produces a summary as output. The summarized file is then read by the Splunk HF:

    FW --> Load Balancer --> Syslog servers --> file --> Summarization tool --> summarized file --> Splunk HFs --> Splunk Indexer

I can write a script for this use case, but do you know if there is already a tool that can do the job? I was checking logwatch; maybe you have a better suggestion. Thanks a lot, Edoardo
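For illustration, a minimal sketch of such a summarizer, assuming space-delimited key=value firewall logs; the path and the field names "src", "dst", and "action" are hypothetical. Run it from cron every "n" minutes against the file the syslog server writes, and point the HF at its output:

    #!/usr/bin/env python3
    # Count events per (src, dst, action) tuple and emit one summary line each.
    from collections import Counter

    def summarize(path):
        counts = Counter()
        with open(path) as f:
            for line in f:
                fields = dict(kv.split("=", 1) for kv in line.split() if "=" in kv)
                counts[(fields.get("src"), fields.get("dst"), fields.get("action"))] += 1
        return counts

    if __name__ == "__main__":
        for (src, dst, action), n in summarize("/var/log/fw/fw.log").items():
            print(f"src={src} dst={dst} action={action} count={n}")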
How do I disable the deployment server temporarily? After disabling the DS temporarily, will the apps be deleted from the clients? Will there be any issues with functionality after re-enabling the DS? And can we push apps normally again after re-enabling it?
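For what it's worth: deployment clients keep the apps they have already received when the DS becomes unreachable; deletions only happen when the DS explicitly instructs them. A hedged sketch of the CLI route — please verify the exact subcommands with "splunk help" on your version, as this is from memory:

    # On the deployment server:
    splunk disable deploy-server
    splunk restart

    # Later, to resume pushing apps normally:
    splunk enable deploy-server
    splunk restart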
Hi All, I am using Splunk Cloud, where I have an index whose retention period is set to 10 years. I want to understand how I can get an email alert once my index is close to its retention period.
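One possible approach, sketched with dbinspect, assuming it is available to your role in Splunk Cloud; the index name and the ~9.6-year threshold are placeholders. Save it as a scheduled alert with an email action that fires when results are returned:

    | dbinspect index=my_index
    | stats min(startEpoch) as oldest_event
    | eval age_days = round((now() - oldest_event) / 86400)
    | where age_days > 3500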
Hi, in the search view of my own app I created an extracted field using the wizard. Once created, under Settings --> Field extractions I can see the extracted field: type inline, assigned to my app, enabled, and with read and write permissions for everyone on the app. Then I go to my app again and perform a simple query in verbose mode. To be sure, I also click on "All fields" so that all fields are actually shown:

    index=cisco sourcetype="cisco:esa:amp"

Unfortunately the extracted field does not show up in the list. Any idea what I am missing? Many thanks.
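One way to narrow this down: run the same regex inline with rex. If the field shows up this way but not via the saved extraction, the problem is permissions or app context rather than the pattern; the regex below is a placeholder for the one the wizard generated:

    index=cisco sourcetype="cisco:esa:amp"
    | rex field=_raw "(?<my_field>PATTERN_FROM_WIZARD)"
    | stats count by my_field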
I am onboarding data from Trend Micro Portable Security via HEC. As per the Trend Micro documentation, it needs 5 indexes to be created on the Splunk side, namely scannedlog, detectedlog, assetinfo, updateinfo, and applicationinfo. We have created these indexes on the HF and used the following transform to send everything to a single index on the indexers:

[trendmicro_routing]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = app_trendmicro

We have also created different sourcetypes for each of the 5 categories of logs. The transforms stanzas used are:

[tmps_scannedlogs]
REGEX = scannedFiles\=
FORMAT = sourcetype::tmps_scannedlogs
DEST_KEY = MetaData:Sourcetype

[tmps_detectedlogs]
REGEX = threatType\=
FORMAT = sourcetype::tmps_detectedlogs
DEST_KEY = MetaData:Sourcetype

[tmps_assetinfo]
REGEX = physicalMemory\=
FORMAT = sourcetype::tmps_assetinfo
DEST_KEY = MetaData:Sourcetype

[tmps_applicationinfo]
REGEX = installPath\=
FORMAT = sourcetype::tmps_applicationinfo
DEST_KEY = MetaData:Sourcetype

[tmps_updateinfo]
REGEX = ^(?!.*(scannedFiles|threatType|physicalMemory|installPath)).*
FORMAT = sourcetype::tmps_updateinfo
DEST_KEY = MetaData:Sourcetype

Now, the scanned and detected logs have one time format, e.g. startTime=Jun 13 2022 14:29:4, the asset info logs have a different time format, e.g. systemDateAndTime=16062022 12:47:26, and the remaining log types (updateinfo, applicationinfo) have no timestamp at all. My understanding is that we cannot apply timestamp settings after routing to the different sourcetypes. How can I make Splunk parse the different time formats and apply the proper settings?
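Your understanding is correct: index-time sourcetype rewriting happens after timestamp extraction, so TIME_* settings keyed on the rewritten sourcetypes never fire. A hedged props.conf sketch against the incoming sourcetype, whose stanza name here is a placeholder for whatever your HEC input assigns; since two different formats follow the two prefixes, a single TIME_FORMAT cannot cover both, which is what a custom datetime.xml via DATETIME_CONFIG is for:

    [tmps_hec_input]
    TIME_PREFIX = (startTime=|systemDateAndTime=)
    MAX_TIMESTAMP_LOOKAHEAD = 24
    # Events with no timestamp (updateinfo, applicationinfo) fall back to
    # index time, which is usually acceptable for those log types.
    DATETIME_CONFIG = /etc/apps/my_trendmicro_app/default/datetime.xml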
Hello Splunkers!! Can anyone please share your thoughts on whether we can monitor .accdb files via Splunk, i.e. integrate MS Access with Splunk? Can we use Splunk DB Connect? Thanks in advance! - Sarah
Hi, I'm doing a project and I've installed a Splunk Enterprise trial on one server and the Universal Forwarder on three other servers (running Ubuntu) that send me logs. On the forwarders there is a script that sends logs for every process running on the server. I would like to create a dynamic list to which process logs are added and tagged as "Well-Known Processes". After that, when new process logs come to the indexer, they are compared with the dynamic list, and if the process is not recognized (does not exist in the list) an alert is triggered. I want to do this to check for suspicious processes. Thanks
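A minimal sketch of this pattern with a lookup; the index, field, and file names are placeholders for whatever your script actually produces. First build or refresh the baseline over a known-good period:

    index=process_logs | stats count by process_name | fields process_name | outputlookup known_processes.csv

Then schedule this as the alert search, triggering when the result count is greater than zero:

    index=process_logs | lookup known_processes.csv process_name OUTPUT process_name AS known | where isnull(known) | stats count by host, process_name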
Dear Sir or Madam, we tried to install the application "Splunk DB Connect" from within Splunk's "Browse more apps" interface. Since this installation failed with the message "", we tried a manual download of the app. When we went on to verify the download's hash, we noticed that the hash of the downloaded file differs from the one presented on the site.

Splunkbase website (https://splunkbase.splunk.com/app/2686/):
SHA256 checksum (splunk-db-connect_390.tgz): 39ccb237a66ff15e71d7db6ed08b8fa03c33a6ebc0455ff47c3cbe6a2d164838

Local splunk-db-connect_390.tgz:
SHA256 checksum: 47EE54FFF7B4017C97BBF29DEE31C049AB77882ABED4EC4D6FC40DA3F90DF243

The "Browse more apps" interface of Splunk also mentions that the app was updated about 7 hours ago. Kind regards, Frederik Brönner
Hi, I've installed a Splunk Enterprise trial on one server and the Universal Forwarder on three other servers (running Ubuntu) that send me logs. I want to create an alert for a specific use case. This alert has to send a notification and run a script on the server whose events triggered the alert. Is that possible? Thanks
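The notification part is a built-in alert action; the script part is the catch, because alert scripts run on the search head, not on the originating forwarder. A minimal sketch of a custom alert action that reaches back to the origin host; the ssh call and script path are purely illustrative assumptions:

    import sys, json, subprocess

    # Splunk invokes custom alert actions as: script.py --execute,
    # with a JSON payload on stdin; "result" holds the first matching
    # event's fields, including host.
    if __name__ == "__main__" and "--execute" in sys.argv:
        payload = json.load(sys.stdin)
        host = payload.get("result", {}).get("host")
        if host:
            # Assumes key-based ssh from the search head to the forwarder.
            subprocess.run(["ssh", host, "/opt/scripts/remediate.sh"])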
Hi, I need to join data from my two sources, A and B, on the fields "Workitems_URL" and "Work Item URL". In source B there is a field "Type Name"; for a valid match, every joined row must have this field equal to "Default". If any of the matches do not have "Type Name" set to "Default", they should not be counted. I tried to use a filter "Type Name"="Default" on source B, but it does not seem to work. Please let me know how to make this work. Thanks
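A sketch of one way to express this; the index names are placeholders. Note that field names containing spaces need single quotes inside where/eval, which may be why the plain filter didn't behave:

    index=sourceA
    | join type=inner Workitems_URL
        [ search index=sourceB
          | where 'Type Name'="Default"
          | rename "Work Item URL" as Workitems_URL ]

If both sources are large, the same logic is usually faster with stats over a common key than with join.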
Hi All, below are two sets of raw events from my DDoS appliance. The sets are separated based on the eventID field. In the first set there are 3 events for eventID=7861430818955774485, while in the second set eventID=7861430818955774999 has 2 events. There is also a field called status, which is a multivalue field: in the first set status has 3 values (starting, holding, and end), while in the second it has 2 values (starting and holding).

Jun 20 13:58:05 172.x.x.x logtype=attackevent;datetime=2022-06-20 13:57:38+08:00;eventID=7861430818955774485;status=starting;dstip=10.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=3450,bps=39006800;subtype=FIN/RST Flood;attackDirection=inbound;
Jun 20 13:59:05 172.x.x.x logtype=attackevent;datetime=2022-06-20 13:58:07+08:00;eventID=7861430818955774485;status=holding;dstip=14.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=0,bps=0;subtype=FIN/RST Flood;attackDirection=inbound;
Jun 20 14:00:07 172.x.x.x logtype=attackevent;datetime=2022-06-20 13:59:07+08:00;eventID=7861430818955774485;status=end;dstip=14.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=0,bps=0;subtype=FIN/RST Flood;attackDirection=inbound;
Jun 20 13:58:05 64.x.x.x logtype=attackevent;datetime=2022-06-20 13:57:38+08:00;eventID=7861430818955774999;status=starting;dstip=10.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=3450,bps=39006800;subtype=FIN/RST Flood;attackDirection=inbound;
Jun 20 13:59:05 64.x.x.x logtype=attackevent;datetime=2022-06-20 13:58:07+08:00;eventID=7861430818955774999;status=holding;dstip=14.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=0,bps=0;subtype=FIN/RST Flood;attackDirection=inbound;

My requirement is to show (in a table or stats) only those eventIDs that do not have a status=end anywhere in their span. From the raw events above, the search should filter out the first set completely and only show data about the second set, because there is no status=end for eventID=7861430818955774999. How can I achieve this? I am using the search below, but it still ends up listing the first eventID as well:

...base search...
| eval start=if(in(status, "starting"), _time, NULL), end=if(in(status, "holding"), _time, NULL)
| stats earliest(start) as start_time, latest(end) as end_time, values(eventID), values(status), values(eventType), values(severity), values(dstip), values(subtype), dc(status) as dc_status BY eventID
| where dc_status > 2
| eval duration=tostring((end_time-start_time), "duration")
| convert ctime(start_time) | convert ctime(end_time)
| rename values(*) as *

If I put status!=end in my base search, it still shows the other events where status=starting or holding for that eventID. I want to eliminate the eventID completely (all events for that eventID) from my results when it has a status=end. Any suggestions?
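One possible approach: aggregate the statuses per eventID first, then drop any eventID whose multivalue status contains "end" (mvfind returns NULL when nothing matches):

    ...base search...
    | stats earliest(_time) as start_time, latest(_time) as end_time,
            values(status) as status, values(eventType) as eventType,
            values(severity) as severity, values(dstip) as dstip,
            values(subtype) as subtype BY eventID
    | where isnull(mvfind(status, "^end$"))
    | eval duration=tostring((end_time-start_time), "duration")
    | convert ctime(start_time) | convert ctime(end_time)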
We have Prod and Non-Prod environments; about 2 weeks ago we started to get this issue in our Non-Prod environment. I have compared the outputs.conf files for all my HF forwarders and found no issues, I can telnet to the indexers from the HFs, and the certificates are in order. I have also compared the Non-Prod outputs.conf to my Prod outputs.conf files. After searching online I have not found anything about this. Has anyone come across it before? Any help would be appreciated!
Hi Community, could any of you please let me know if there is any way, or a pre-written app, to connect Azure Sentinel with Splunk SOAR? As of now, I am not able to find any app for this. Regards, Saurabh