All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Looking to change the navigation menu background color based on panel search criteria. The idea is that I don't want to open each dashboard to see which alerts are there; I just want the navigation menu to change color dynamically (green, yellow, or red), and then I will go to the specific dashboard.
Because of licensing reasons, I want to stop indexing these events, as they make up almost 50% of the index: index=cisco dest_port=53. So basically DNS requests. Is it possible, for this specific index=cisco, to stop indexing the logs where dest_port=53? I can't do it from the Cisco firewall itself. I googled a bit and the consensus seems to be to send the logs to nullQueue by modifying props.conf and transforms.conf. But what I'm struggling with is: where are these files? My Splunk architecture is two search heads in a cluster and one license manager server. Where do I modify these files, on both search heads?
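A minimal sketch of the nullQueue approach. This filtering happens at parse time, so the files belong on whichever instance first parses the data (the indexers or a heavy forwarder), not on the search heads. The sourcetype name and the regex below are assumptions: dest_port is a search-time field, so the regex has to match however port 53 appears in the raw event text.

    # props.conf (on the indexers or heavy forwarder)
    # assumption: replace cisco:asa with your actual sourcetype
    [cisco:asa]
    TRANSFORMS-drop_dns = drop_dns_events

    # transforms.conf
    # assumption: this regex matches the destination port 53 in your raw events;
    # adjust it to your firewall's log format before deploying
    [drop_dns_events]
    REGEX = (?:dst|dest)\S*[/:]53\b
    DEST_KEY = queue
    FORMAT = nullQueue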
An example: we have defined two malicious URLs in local_http_intel. This triggers false positives in the Threat Activity dashboard of ES on the valid and safe domain github.com. How can we prevent or fix this?
As the title suggests, we are planning to migrate our heavy forwarder to a separate VLAN. However, this is the first time I've done anything like this, and I was wondering what I need to consider. If anyone can help, that would be great.
Hello All,

I currently have 6 indexers. Three of them receive forwarded data from outside sources, and the other three were added much later. I have a search factor of 1 and a replication factor of 1 (I understand this is not optimal, but due to space constraints it was the best I could do).

My main question is: why isn't a data rebalance rebalancing primary buckets? Even with an RF of 1, the three new indexers seem to receive only replicated buckets, which is rather confusing.

I have tried using this (from Rebalance the indexer cluster - Splunk Documentation):

    curl -k -u admin:pass --request POST \
      https://localhost:8089/services/cluster/manager/control/control/rebalance_primaries

Nothing happened after that.
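One thing worth noting: rebalance_primaries only reassigns primacy among existing searchable copies, and with RF=1 and SF=1 each bucket has exactly one searchable copy, so there is nothing for it to reassign. If the goal is to move the bucket copies themselves onto the new peers, a hedged sketch of the data rebalance CLI, run on the cluster manager (check your version's docs for availability):

    # run on the cluster manager node
    splunk rebalance cluster-data -action start

    # check progress
    splunk rebalance cluster-data -action status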
I have a field 'JOB_STATUS' with the values 'STARTING' and 'SUCCESS'. From these I have to calculate the runtime: runtime = time of SUCCESS - time of STARTING. Can you please let me know how to do this?
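A minimal sketch, assuming each job emits one STARTING and one SUCCESS event and that a field such as JOB_ID (an assumption) ties the two together:

    index=my_jobs (JOB_STATUS=STARTING OR JOB_STATUS=SUCCESS)
    | stats earliest(_time) as start_time latest(_time) as end_time by JOB_ID
    | eval runtime_sec = end_time - start_time
    | eval runtime = tostring(runtime_sec, "duration")

The final eval just formats the seconds as HH:MM:SS; drop it if raw seconds are fine.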
Hello all, I have a problem with duplicated rule names in the Incident Review multiselect box. In Settings -> Searches, reports, and alerts I have only one search. In Content Management, only one too. I also checked correlationsearches_lookup for duplicates. How does this multiselect work (from what lookup or search does it take its data), and how can I fix my problem with duplicated names?
Hello, what is the expected log size for FMC log ingestion, for example with 180 days of retention? I am using Splunk for security operations and wanted to know what kind of logs are relevant to this purpose. Thank you, Adrian
Hello community, I am trying to set up a search to catch any successful logon after x failed logons within y minutes. However, I am struggling to see how I would build this search.

Searching for successful events is easy: index=<index> status="logged in"

As is finding unsuccessful events: index=<index> message="Invalid credentials." status="nog logged"

I figured I could do a count by IP address and/or username for the failed events, but how do I connect the two and add time? I am assuming this should be some combination of "and"/eval/if and where. Just to get a sense of what I am thinking:

    index=<index> ... ip-address WHERE status="logged in" AND (index=<index> message="Invalid credentials." status="nog logged" >3 WHERE delta_time<10min)

What I would like is an output with any IP address where a successful logon was preceded by, say, 3 failed logons within 10 minutes. I am assuming this will be a large and complex search, at least for me, so any suggestions would really be appreciated. Best regards
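A minimal sketch using streamstats with a sliding time window, assuming the events carry an ip field (an assumption; substitute your actual field name):

    index=<index> (status="logged in" OR (message="Invalid credentials." AND status="nog logged"))
    | eval outcome = if(status="logged in", "success", "failure")
    | sort 0 _time
    | streamstats time_window=10m count(eval(outcome="failure")) as recent_failures by ip
    | where outcome="success" AND recent_failures >= 3
    | table _time ip recent_failures

streamstats needs the events in time order for time_window to apply, hence the sort; the window then counts, per IP, the failures in the 10 minutes preceding each event, and the where clause keeps only successes with at least 3 such failures.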
I am getting the output time, but I want to round the time value up to the next 10th minute. The expected output is the rounded time; can anyone please guide me on how to write a query for this?

    File time                   Rounded time
    07/19/2022 12:16:48.303     07/19/2022 12:20:00.000
    07/19/2022 12:11:36.660     07/19/2022 12:20:00.000
    07/19/2022 09:33:48.091     07/19/2022 09:40:00.000
    07/19/2022 00:30:24.749     07/19/2022 00:40:00.000
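A minimal sketch, assuming the timestamp lives in a field called file_time (an assumption) in the format shown:

    | eval t = strptime(file_time, "%m/%d/%Y %H:%M:%S.%3N")
    | eval rounded = ceiling(t / 600) * 600
    | eval rounded_time = strftime(rounded, "%m/%d/%Y %H:%M:%S.%3N")

600 seconds is 10 minutes, so ceiling(t/600)*600 snaps each epoch value up to the next 10-minute boundary (a value already exactly on a boundary stays put).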
Hi Splunkers, I have a question related to a JSON file that I'm trying to parse. I want to remove the first part of it, up to {"kind"; a sample is added below. I tried using FIELD_HEADER_REGEX in props.conf, which I think is supposed to do that. So far I've tried and failed with the following:

    FIELD_HEADER_REGEX={"activities":\s\[(.)
    FIELD_HEADER_REGEX={"activities":\s\[
    FIELD_HEADER_REGEX={"activities":
    FIELD_HEADER_REGEX=\{\"activities\"\:

Some of the above work on regexr.com with the sample data.

    {"activities": [{"kind": "admin#reports#activity", "id": {"time": "2022-07-18T14:04:19.866Z", "uniqueQualifier": "-2451221827967636314", "applicationName": "redacted", "customerId": "redacted"}, "etag": "\"dng2uCItaXPqmMj2MG4RUqVkRjnE_4kf0VvQ0_WkiTg/6j3Reg7FneLgLDfjE-lZuZUOrdc\"", "actor": {"callerType": "USER", "email": "redacted", "profileId": "redacted"}, "ipAddress": "redacted", "events": [{"type": "SECURITY_INVESTIGATION", "name": "SECURITY_INVESTIGATION_QUERY", "parameters": [{"name": "INVESTIGATION_DATA_SOURCE", "value": "USER LOG EVENTS"}, {"name": "INVESTIGATION_QUERY", "value": "(empty)"}]}]},

Any help is appreciated. Thank you!
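One thing to check: FIELD_HEADER_REGEX belongs to the structured-data (INDEXED_EXTRACTIONS) settings in props.conf, so it is ignored unless the sourcetype uses indexed extractions. A hedged alternative is stripping the wrapper at parse time with SEDCMD; the sourcetype name below is an assumption:

    # props.conf
    # assumption: replace google:workspace with your actual sourcetype
    [google:workspace]
    SEDCMD-strip_activities_wrapper = s/^\{"activities":\s*\[//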
Hi all, I have a use case where I need to check for duplicate JIRA contents. Basically, we are ingesting our JIRA issues into SOAR as containers. In a particular playbook, I would like to query the other SOAR containers with similar labels and, based on that information, do some processing. Is there any way I can achieve this in SOAR?
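A hedged sketch using the SOAR REST API, which playbooks and utility scripts can call; the host, token, and the label value "jira" are all assumptions, and the _filter_* parameters take quoted literals (URL-encoded below):

    curl -k -H "ph-auth-token: <your-token>" \
      "https://<soar-host>/rest/container?_filter_label=%22jira%22&page_size=100"

The response's data array lists the matching containers, which the playbook can compare against the current one.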
Hello Experts, I am stuck with a timechart percentage query: I want to sort by a field count and not the default alphabetical order. There are two queries; it would be best if I could get help or a workaround on either one.

Query 1:

    index=xyz catcode="*" (prodid="1") (prodcat="*") success="*"
    | eval TheError=if(success="false" AND Error like "%%", count, 0)
    | timechart span="15m" eval(round(sum(TheError)*100/sum(count),2)) by catcode useother=f

In the query above I want a way to sort by catcode count, not the default alphabetical order.

OR

Query 2:

    index=xyz (prodid="1") (prodcat=*) (catcode=*) success=*
    | timechart span=1w sum(count) by catcode limit=10 useother=f usenull=f
    | untable _time catcode count
    | eventstats sum(count) as Total by _time
    | eval Fail_Percent=round(count*100/Total,2)
    | table _time, catcode, Fail_Percent
    | xyseries _time catcode Fail_Percent
    | sort -catcode

In the query above all is fine, except I don't want 'eventstats sum(count) as Total by _time', as it counts all events; I want the Total counted by catcode and then to calculate the percentage. Can you help please?

Thanks in advance, Nishant
Hi, I have a table after using stats: | stats values(durationSum) as duration by Fauf Station. How can I convert it to a table with only one line per Fauf, in a format like: Fauf, duration_Station1, duration_Station2, duration_Station7, duration_Station10? Thanks for helping in advance!
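A minimal sketch using xyseries to pivot the Station values into columns, assuming Station holds values like Station1, Station2, and so on:

    | stats values(durationSum) as duration by Fauf Station
    | eval col = "duration_" . Station
    | xyseries Fauf col duration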
    index="main" source="all_digikala1.csv" | table title price | map search="search index=main source=all_sites1.csv | eval title_m=$title$,price_m=$price$ | table title_m price_m title price stor... See more...
    index="main" source="all_digikala1.csv" | table title price | map search="search index=main source=all_sites1.csv | eval title_m=$title$,price_m=$price$ | table title_m price_m title price store " maxsearches=99999999 | similarity textfield=title_m comparefield=title algo=MASI limit=200000000 | sort limit=0 -similarity | where similarity > 0.2 | table title_m title similarity store        I run the above code in Splunk   but it returns the following error Error in 'similarity' command: External search command exited unexpectedly with non-zero error code 9   The map part works correctly and returns a result of about 15 million, but the similarity part has a problem with this number, because when I reduce the output number of the map below 14 million, the code works correctly and the result is correct. Of course, this has nothing to do with the similarity command, because for example, when I use the jellyfisher command instead of similarity, the same error occurs again.   similarity command is related to "nlp text analytics", which has been added to Splunk
Hi Splunkers, I'm working on a dashboard panel where I have to show the average count of events per user. This should be dynamic, meaning it should give the average based on the time range I select.

Example:

    user    count
    a       9
    b       3
    c       5
    d       2

If I select a time frame of 7 days and user "a" from the inputs, the panel should show the average event count over those 7 days for user a. Please help me achieve this. TIA.
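A minimal sketch under one common reading (daily average over the selected window), assuming a time picker scoping the panel and a user dropdown bound to a token named $user_tok$ (both names are assumptions):

    index=<index> user=$user_tok$
    | bin _time span=1d
    | stats count by _time, user
    | stats avg(count) as avg_daily_events by user

bin buckets the events per day, and the final stats averages those daily counts across whatever range the picker selects.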
Hello everyone, I have a CSV file which shows me the power status of each server, i.e. whether the server is powered on or off. I want to make a table with "powered on" as one row and "powered off" as another row, showing the total number of powered-on and powered-off servers as a count.
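A minimal sketch, assuming the CSV is available as a lookup file and has a field named power_status (both the file and field names are assumptions):

    | inputlookup server_power.csv
    | stats count by power_status

If the file is indexed rather than uploaded as a lookup, start instead with something like index=<index> source="server_power.csv" followed by the same stats.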
Hi Team, I have time in the two formats below and I want to convert them to minutes. How can I do this?

Format 1:

    1 Hour
    10 Hours 47 Minutes
    1 Day 5 Hours 15 Minutes
    45 Minutes

Format 2:

    00:00:00
    00:09:00
    22:30:00
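A minimal sketch handling both formats, assuming the values live in a field called duration (an assumption), and reading Format 2 as HH:MM:SS (so 22:30:00 is 22 hours 30 minutes):

    | rex field=duration "(?:(?<d>\d+)\s+Days?)?\s*(?:(?<h>\d+)\s+Hours?)?\s*(?:(?<m>\d+)\s+Minutes?)?"
    | eval parts = split(duration, ":")
    | fillnull value=0 d h m
    | eval minutes = if(match(duration, "^\d+:\d+:\d+$"),
          tonumber(mvindex(parts,0))*60 + tonumber(mvindex(parts,1)) + tonumber(mvindex(parts,2))/60,
          tonumber(d)*1440 + tonumber(h)*60 + tonumber(m))

The rex pulls optional day/hour/minute parts out of Format 1, and the match() branch switches to colon-splitting for Format 2.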
Hello Members, I have a basic question: I am not sure how to get data into Splunk, into a custom index, use a source type, and then extract fields. I have the add-on installed for Cisco network devices, but I am not sure it is the correct app for my case. I have a remote syslog server (running rsyslog) that builds log files for Cisco switches and routers. I have a universal forwarder installed on the syslog server, and it forwards data to Splunk if I configure it correctly.

I have tried configuring the Splunk receiver two ways. The first is the "Forwarding and receiving" option in the "DATA" area. This works, but only shows data from the host sending the log info, and uses only one port (I am using 9997); I have not seen how to set a data source or source type for the incoming data. The second way seems to be the "Data Inputs" part of the "DATA" area, but this seems not to be possible, as the data is coming from a universal forwarder rather than a Splunk Enterprise instance configured as a forwarder.

How can I assign a source type and index to the data that comes in from the host, with 9997 configured as the receiving port? Sorry for such a confusing question. Regards, eholz1
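A minimal sketch of the usual pattern: the index and sourcetype are assigned in inputs.conf on the universal forwarder itself, not on the receiver. The monitor path, index name, and sourcetype below are assumptions to adapt to your setup:

    # inputs.conf on the universal forwarder
    # (e.g. $SPLUNK_HOME/etc/system/local/inputs.conf)
    # assumption: rsyslog writes the cisco files under /var/log/cisco/
    [monitor:///var/log/cisco/*.log]
    index = network
    sourcetype = cisco:ios

The receiving side then only needs the 9997 listener enabled and an index named network created; the Cisco add-on's parsing keys off the sourcetype.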
Is there an SPL query to find the last date UFs phoned home to a specific deployment server? We have many deployment servers in our company.
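A hedged sketch using the deployment server's REST endpoint, assuming the DS is a search peer of the search head you run this on (the splunk_server name, and the field names as I recall them from that endpoint, are assumptions to verify):

    | rest /services/deployment/server/clients splunk_server=<ds_name>
    | eval last_phonehome = strftime(tonumber(lastPhoneHomeTime), "%Y-%m-%d %H:%M:%S")
    | table hostname ip last_phonehome

If REST access isn't an option, the phone-home requests also appear in the DS's own _internal splunkd_access.log as POSTs to /services/broker/phonehome, which can be aggregated by client IP with stats latest(_time).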