All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi team - We currently use Elastic to perform log storage and alerting, but we are in the process of converting to Splunk. Currently we have some Elastic alerting that runs every five minutes, and looks for the number of calls to a specific Apigee service. It works out how many calls were made in each 1 second interval, and alerts if the traffic in one or more intervals is above a threshold. Is it possible to do the same in Splunk? Run a query on hits in the last 5 minutes, sort it to provide a count for each 1 second interval, and work out the highest count value?
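A sketch of the equivalent in SPL (the index, sourcetype, and threshold of 100 are placeholders for your environment); saved as an alert on a */5 cron schedule over the last 5 minutes, it fires whenever any 1-second bucket exceeds the threshold:

```
index=apigee sourcetype=apigee:service earliest=-5m@s
| bin _time span=1s
| stats count by _time
| stats max(count) as max_per_second
| where max_per_second > 100
```

Dropping the final two lines and ending at `stats count by _time` gives the per-second breakdown if you want to chart it instead of alerting.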
I have two saved searches:

savedsearch1: | basesearch | stats count by _time, LocationId
savedsearch2: | basesearch | stats count by _time, LocationId

I want to track LocationIds based on the criteria below:
1) LocationIds that are present in savedsearch2 but not in savedsearch1
2) LocationIds present in both reports, but only where the savedsearch1 timestamp > savedsearch2 timestamp; otherwise exclude them

I can get the LocationIds that are only present in savedsearch2 using the query below, but I am not able to make the time comparison:

| savedsearch "savedsearch1" | eval flag="match" | append maxtime=1800 timeout=1800 [ savedsearch "savedsearch2" | eval flag="metric"] | stats values(flag) as flag by LocationId | where flag="metric" and flag!="match" | table LocationId

Any help would be appreciated!
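One hedged sketch of the time comparison, keeping the latest timestamp per LocationId from each saved search:

```
| savedsearch "savedsearch1"
| eval src="s1"
| append maxtime=1800 timeout=1800
    [| savedsearch "savedsearch2" | eval src="s2"]
| stats max(eval(if(src="s1", _time, null()))) as t1
        max(eval(if(src="s2", _time, null()))) as t2
        by LocationId
| where isnotnull(t2) AND (isnull(t1) OR t1 > t2)
| table LocationId, t1, t2
```

Criterion 1 is the isnull(t1) branch (present only in savedsearch2); criterion 2 is the t1 > t2 branch.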
Hi, in the old XML dashboards we used to have the "x" to hide the submit button for inputs, whereas in Dashboard Studio there isn't one. Does anybody know whether the button can be hidden and the dashboard configured so that the default inputs run automatically, without hitting Submit? Thanks a lot!
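For what it's worth, recent Dashboard Studio versions expose layout options in the dashboard's JSON definition that may cover this; the option names below are from memory and should be verified against the documentation for your Splunk version:

```
"layout": {
    "type": "grid",
    "options": {
        "submitButton": false,
        "submitOnDashboardLoad": true
    },
    "structure": []
}
```

With the submit button disabled, inputs apply as soon as they change; submitOnDashboardLoad runs the defaults on load.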
Hello Splunkers!! I have attached two screenshots related to skipped searches. As the graph shows, we frequently have a high number of skipped searches. When I investigated, I saw that no workload_pool is assigned to many saved searches (see the second screenshot).

My thinking: if many searches trigger at the same time and no workload_pool setting is assigned, that will hurt search performance and increase the skip ratio. Please let me know whether I am thinking about this the right way. If not, please guide me or suggest some good workarounds. I know there are many blogs on this, but please do share any specific suggestions.
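A quick way to see which searches are being skipped and why, from the standard scheduler log (time range is a placeholder):

```
index=_internal sourcetype=scheduler status=skipped earliest=-7d
| stats count by reason, app, savedsearch_name
| sort - count
```

If the dominant reason mentions concurrent search limits, the bottleneck is likely overall scheduler concurrency or searches piling up on the same cron schedule, rather than workload pools alone.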
Hello, My Splunk Enterprise is no longer syncing Tenable Data. Please help. Thank you.
Hello, where do I find information on how to troubleshoot the error below?

2022-12-05 15:21:53,383+0000 INFO pid=299674 tid=MainThread file=threatmatch.py:run:404 | status="This modular input does not execute on a search head cluster member" msg="will_execute="False" config="SHC" msg="Deselected based on SHC master selection algorithm." master_host="None" use_alpha="None" exclude_master="None"

The event is in the _internal index with sourcetype=threatintel:threatmatch. I have a hard time finding documentation that points me to a solution.
Hi Splunkers, I use many alerts where the result contains the username. A map search then looks up this user in the user-list index, checks the group memberships, and sends the alert to the corresponding IT department (there are many countries, and a lookup returns the support email for the user's country group). If the user is not a member of any country group, the support email evaluates to the central one.

That works fine... as long as the user exists in the users index. If the user cannot be found there, the whole search returns nothing. Example:

index=logons action=failure | stats dc(action) as failures by username | where failures > 20 | map maxsearches=50 search=" search index=users user=\"$username$\" | spath memberOf{}.displayName output=groupName | eval username=\"$username$\", failures=\"$failures$\" | lookup support.csv group as groupName output support" | eval support = if(isnull(support) OR support="", "central@example.com", support) | table username, failures, support

So if a user fails to log in more than 20 times, the alert triggers and sends an email to support, assigned by group membership; if there is no membership, it goes to central IT. But if the user cannot be found in index=users for some reason, the alert does not trigger at all. I would like the alert to trigger and send to central@example.com (since a non-existing user has no group), with the username from the base search included.
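One way to keep the unmatched rows is to replace map (which drops any row whose subsearch returns nothing) with a left join; a sketch under the same field names as above (join has subsearch size and time limits, so treat this as a starting point, not a drop-in fix):

```
index=logons action=failure
| stats dc(action) as failures by username
| where failures > 20
| join type=left username
    [ search index=users
      | rename user as username
      | spath memberOf{}.displayName output=groupName
      | lookup support.csv group as groupName output support
      | fields username, support ]
| eval support=coalesce(support, "central@example.com")
| table username, failures, support
```

With type=left, users missing from index=users keep their row with a null support field, which coalesce then fills with the central address.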
Hello, in the previous XML-based dashboard approach in Splunk, I was able to hide charts based on a token value with something like: <chart depends="$showTypeCharts$">. However, with the new JSON-based dashboards, I'm not able to hide a visualization. How do I do that?
I am forwarding F5 logs from a syslog server, but I have an additional timestamp and host IP (the leading portion of the log below). I would like to remove these at index time using SEDCMD. My regex tests fine, and I've tried several iterations of the regex. Any ideas on what I am doing wrong?

Location: /opt/splunk/etc/apps/search/local/props.conf

[f5-apm]
category = Network & Security
pulldown_type = 1
SEDCMD-noheader = /s^\w+\s+\d+\s+\d+:\d+:\d+\s+\d+\.\d+\.\d+\.\d+\s+//g

Sample event:

Dec 5 09:45:55 172.16.97.188 Dec 5 09:45:45 gg-f5-02.domain.org notice tmm1[24012]: 01490500:5: /dmz/VPNClient_access_policy:dmz:17709577: New session from client IP 54.244.52.193 (ST=Oregon/CC=US/C=NA) at VIP 172.16.253.152 Listener /dmz/apm_vpn_vs_https (Reputation=Unknown)
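For comparison, sed replace syntax is s/regex/replacement/flags, so the expression above starts with /s where it should start with s/. A corrected stanza might look like this (untested sketch; note that SEDCMD applies at parse time, so props.conf must live on the indexer or heavy forwarder that first parses the data, and it only affects newly indexed events):

```
[f5-apm]
SEDCMD-noheader = s/^\w+\s+\d+\s+\d+:\d+:\d+\s+\d+\.\d+\.\d+\.\d+\s+//
```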
My search is not working. I want to get hits per minute, like the example in my first screenshot, but my search doesn't produce anything like that:
Hi all, I need your help determining the details of issues that affect users while running SPL. The details may include errors, the respective SPL, the date/timestamp of occurrence, and any other information that can be used to resolve those issues. So far I have tried the following:

1. Fetching the saved search names and their errors:
   index=_internal source=*scheduler.log search_type=scheduled | stats count BY savedsearch_name, reason
2. Fetching the list of errors for all saved searches:
   index=_internal source=*scheduler.log search_type=scheduled | stats count BY reason

Is there any other SPL that can be built to capture errors not covered by the above? For example: scheduled searches with syntax errors, or corrupted data. And how do I fetch errors for SPL executed by end users on an ad-hoc basis?

Additionally, it would be helpful if you could share an approach to determine which index fails the most over a period of time. Thank you
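Two sketches that may help (field availability varies by Splunk version). Ad-hoc user searches are recorded in the audit log:

```
index=_audit action=search info=* NOT user=splunk-system-user
| stats count by user, info
```

And general search-time errors surface in splunkd's internal logs:

```
index=_internal sourcetype=splunkd log_level=ERROR earliest=-24h
| stats count by component
| sort - count
```

Correlating the two (e.g. by search_id where present) is left as an exercise, since the exact fields depend on your version.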
Hello, we are trying to build a dashboard for incident SLA compliance. The data is ingested from JIRA: tickets are created in JIRA, and Splunk retrieves the information frequently. At this point the fields that concern me are the ticket number and creation time. However, when an existing ticket in JIRA is updated, the new values overwrite the existing values in Splunk, so I lose what was previously captured; in this case I lose the creation time, because the same field is updated with the new time. How can I capture the data in the format below? Please advise.

Ticket Number, Creation Time, Updated Time

-- Thanks, Siddarth
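One common pattern, assuming each JIRA poll is indexed as a new event rather than overwriting a lookup/KV store record (the index, sourcetype, and ticket_number field are placeholders for your actual setup):

```
index=jira sourcetype="jira:issue"
| stats earliest(_time) as first_seen latest(_time) as last_update by ticket_number
| eval "Creation Time"=strftime(first_seen, "%F %T"),
       "Updated Time"=strftime(last_update, "%F %T")
| table ticket_number, "Creation Time", "Updated Time"
```

If updates really do overwrite a single record (e.g. a KV store collection), an alternative is to also write each poll into an append-only summary index so that history is preserved.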
Hello, we have noticed that in Monitoring Console -> Indexing -> Indexes and Volumes -> Indexes and Volumes: Deployment, the "Oldest Data Age (days)" value for a few indexes is extremely high (e.g. 1959 days). Retention time for those indexes is 180 days (frozenTimePeriodInSecs = 15552000). We have checked the data and it really does contain very old events (e.g. from 2017), although the ignoreOlderThan = 14d parameter was added to inputs.conf during data onboarding. We have already deleted the very old data (older than 400 days) with the "delete" command, but the Indexes and Volumes: Deployment dashboard keeps showing very old values for the Oldest Data Age. Does anyone know what else to check and how to solve this issue? BR, Justyna
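Two points worth noting: frozenTimePeriodInSecs freezes a bucket only once the newest event in that bucket exceeds the retention period, and the "delete" command only masks events from search without removing them from disk, so bucket metadata (which the dashboard likely reads) still reflects the old timestamps. A sketch to inspect bucket time ranges for a given index (dbinspect is a standard command; the index name is a placeholder):

```
| dbinspect index=myindex
| stats min(startEpoch) as oldest_epoch max(endEpoch) as newest_epoch by state
| eval oldest=strftime(oldest_epoch, "%Y-%m-%d"),
       newest=strftime(newest_epoch, "%Y-%m-%d")
```

Buckets whose oldest events date from 2017 but whose newest events are recent will sit there until the newest event ages out.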
Hello all, we have just found some pretty old catalogue and delta files in /opt/splunk/var/run/searchpeers on our indexers (3 indexers in the cluster, Splunk v8.2.9). We have never deleted anything from that location, as it always seemed a bit risky. At the same time, the catalogues were last modified in 2017 and seem to contain unneeded and outdated data from already-decommissioned servers. Is it safe to delete old catalogues in /opt/splunk/var/run/searchpeers? If so, should splunkd be stopped prior to deletion, or should a rolling restart be initiated after deletion, since this is an indexer cluster? Any hints would be much appreciated! BR, Justyna
Hi all, I need to extract some fields from authentication events in different log formats. Here are some examples:

LOG1: AddSenaoLog%Client-6:LINUX_device(00:00:00:00:00:00/1.1.1.1) joins WLAN(WIFI) from MY-WIFI-0000-INT(00:00:00:00:00:00)
LOG2: AddSenaoLog%Client-6:(00:00:00:00:00:00) joins WLAN(WIFI-CITYLIFE) from MY-WIFI-0000-INT(00:00:00:00:00:00)
LOG3: %Client-6:LINUX_device(00:00:00:00:00:00/1.1.1.1) joins WLAN(WIFI-OSPITI) from MY-WIFI-0000-INT(00:00:00:00:00:00)
LOG4: %Client-6:(00:00:00:00:00:00) joins WLAN(WIFI-OSPITI) from MY-WIFI-0000-INT(00:00:00:00:00:00)

As you can see, in some cases (LOG2 and LOG4) the first parenthesis contains only the MAC address, while in others (LOG1 and LOG3) it contains both the IP and the MAC address. I need to extract these two values (or only the MAC if the IP is missing, as in LOG2 and LOG4) whenever "joins" appears in the log. Thanks in advance!
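A single rex that may cover all four shapes, with the IP group optional (untested sketch; the index and field names are my own placeholders):

```
index=wifi "joins"
| rex "Client-6:(?<device>\w*)\((?<mac>(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2})(?:/(?<ip>\d{1,3}(?:\.\d{1,3}){3}))?\)\s+joins\s+WLAN\((?<wlan>[^)]+)\)"
| table device, mac, ip, wlan
```

In LOG2/LOG4, device matches the empty string and ip stays null; in LOG1/LOG3 both mac and ip populate.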
Hi Splunkers, I'm having problems with the EXTRACT settings in props.conf. I'm trying to extract fields from a log formatted like this (values changed for privacy reasons):

DateTime: 2022-12-05T08:00:37 InterchangeId: asdf12-asdf12-asdf12-asdf12-asdf12 DocumentId: Sender: foobar Receiver: barfoo MessageType: foo RequesterId: bar Status: Running Filename: file.json
DateTime: 2022-12-05T08:00:37 InterchangeId: asdf12-asdf12-asdf12-asdf12-asdf12 DocumentId: Sender: foobar Receiver: barfoo MessageType: foo RequesterId: bar Status: Running Filename: file.json

I uploaded this data into Splunk and wrote the regexes that extract the values. This search works perfectly:

index=* sourcetype=test-sourcetype | rex "InterchangeId:\s(?<InterchangeId>[^\n\r]+)" | rex "DocumentId:\s(?<DocumentId>[^\n\r]+)" | rex "Sender:\s(?<Sender>[^\n\r]+)" | rex "Receiver:\s(?<Receiver>[^\n\r]+)" | rex "MessageType:\s(?<MessageType>[^\n\r]+)" | rex "Status:\s(?<Status>[^\n\r]+)" | rex "Filename:\s(?<Filename>[^\n\r]+)" | rex "RequesterName:\s(?<RequesterName>[^\n\r]+)"

However, when I try to implement this using EXTRACT in props.conf, it does not work:

[test-sourcetype]
EXTRACT-InterchangeId = InterchangeId:\s(?<InterchangeId>[^\n\r]+)
EXTRACT-DocumentId = DocumentId:\s(?<DocumentId>[^\n\r]+)
EXTRACT-Sender = Sender:\s(?<Sender>[^\n\r]+)
EXTRACT-Receiver = Receiver:\s(?<Receiver>[^\n\r]+)
EXTRACT-MessageType = MessageType:\s(?<MessageType>[^\n\r]+)
EXTRACT-Status = Status:\s(?<Status>[^\n\r]+)
EXTRACT-Filename = Filename:\s(?<Filename>[^\n\r]+)
EXTRACT-RequesterName = RequesterName:\s(?<RequesterName>[^\n\r]+)

I have used btool to verify that this is picked up on the search head, and I can also see the config in the GUI under Settings -> Fields. I have tried applying KV_MODE = none as well, without any difference. And yes, this config is deployed to an app on a search head, since it's a search-time extraction. I've tried many different regexes, to check whether that is the problem, but without any luck. Does anyone have any idea what I'm doing wrong here?
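One frequent cause when inline rex works but EXTRACT does not is app permission scoping: the search runs in a different app than the one holding props.conf, and the extractions are not exported globally. A hedged sketch of the metadata fix, placed in the app's metadata/local.meta:

```
[props]
export = system
```

It is also worth double-checking that the events' sourcetype at search time is exactly test-sourcetype (no index-time sourcetype renaming in play).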
I have to whitelist events based on 2 columns in a lookup, but the second column has multiple values. So we have to whitelist based on the condition that both the username and the destinations appear in two fields of the same event. In the event, too, the destination field is multivalued (values(dest)), so multiple destinations are in one cell. The condition is that the user with those destinations should be whitelisted. How can we achieve this?
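A possible sketch, assuming the lookup has columns username and dest, with dest holding a comma-separated list (mvmap requires Splunk 8.0+; all names here are placeholders):

```
| lookup whitelist.csv username OUTPUT dest as allowed_dests
| eval allowed_dests=split(allowed_dests, ",")
| eval unlisted=mvmap(dest, if(isnull(mvfind(allowed_dests, "^".dest."$")), dest, null()))
| where isnotnull(unlisted)
```

Events where every destination is covered by the whitelist end up with unlisted empty and are filtered out; swap the final where to isnull(unlisted) if you want to keep only the fully whitelisted events instead.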
I have a table with four columns: time, duration, clientip, query. Duration is a numeric field, and I can plot a line chart using the first two columns; however, I also want to see the corresponding values of the last two columns in the tooltip. Is this possible?
Hi, I'm new to Splunk, and maybe I didn't follow the instructions correctly from a post two years ago. I'm trying to figure out how to reset my login credentials for the Splunk Enterprise admin console. Can someone give me the correct, updated solution? Regards,
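For reference, the usual reset procedure on Splunk Enterprise 7.1+ is: stop Splunk, rename $SPLUNK_HOME/etc/passwd (e.g. to passwd.bak), create $SPLUNK_HOME/etc/system/local/user-seed.conf with the stanza below, then start Splunk again; verify the exact steps against the admin docs for your version before deleting anything:

```
[user_info]
USERNAME = admin
PASSWORD = <your new password>
```

On first startup Splunk consumes user-seed.conf and recreates the admin account with that password.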
Hi all, I would like to highlight every field in the same column in blue, but I don't know how to configure it. Does anyone have ideas? For numeric fields, I currently set the color mode to "range" and map the range from minimum to maximum to one color, which achieves what I want. But for fields containing letters, I don't have a good way to do it. Thank you.