All Topics

I have a threat activity rule that looks at internal IPs attempting to communicate externally with malicious IPs (based on threat intelligence lookups/feeds) and vice versa. However, I would like to tune my search to filter out events that were blocked by the firewall or proxy and alert only on true positives, and I'm not sure whether I should tune the search itself or modify the Threat Intelligence data model. Has anyone done or come across this before? Please help.
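A minimal sketch of the search-side approach, assuming the firewall/proxy events are CIM-mapped so that blocked traffic carries action="blocked" (the field name and values are assumptions; adjust to your data):

... your existing threat activity search ...
| search NOT action IN ("blocked", "denied", "dropped")

Filtering in the search keeps the Threat Intelligence data model intact for other use cases, which is usually preferable to modifying the data model itself.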
Hello Splunkers, I have a query where I ran |stats values(abc) as abc over time and got the results below. I want the abc column to omit consecutive duplicate results: compare each row with the one above it and keep only the rows whose values (or count of values) differ. For example, 2022-04-07 12:41:17 and 2022-04-04 10:16:34 have the same abc values, so the second one should be omitted; then compare against the 3rd result, which is different, so keep it; then compare the 3rd with the 4th, and so on. Rows that have fewer values, like 2022-03-07 11:48:46, should also be kept. I tried dedup but it did not work. Any suggestions?

2022-04-07 12:41:17   1334821020002 1334821020007 1334821020011 1334821020024 1334821020027 1334821020043 1334821020053 1334821020075
2022-04-04 10:16:34   1334821020002 1334821020007 1334821020011 1334821020024 1334821020027 1334821020043 1334821020053 1334821020075
2022-03-22 07:52:24   1335221020082 1335221020268 1335221020282 1335221020591 1335221020597 1335221020619 1335221020721 1335221020848
2022-03-22 07:36:36   1335221020082 1335221020268 1335221020282 1335221020591 1335221020597 1335221020619 1335221020721 1335221020848
2022-03-18 06:31:18   1335221020082 1335221020268 1335221020282 1335221020591 1335221020597 1335221020619 1335221020721 1335221020848
2022-03-14 13:11:15   1335221020082 1335221020268 1335221020282 1335221020591 1335221020597 1335221020619 1335221020721 1335221020848
2022-03-09 06:42:36   1335221020082 1335221020268 1335221020282 1335221020591 1335221020597 1335221020619 1335221020721 1335221020848
2022-03-07 11:48:46   1335221020591 1335221020597 1335221020619 1335221020721 1335221020848

Thanks in advance!
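A minimal sketch of one way to drop consecutive duplicates, assuming the rows are already in the order shown and abc is a multivalue field:

| streamstats current=f window=1 values(abc) as prev_abc
| eval this_key=mvjoin(abc, ","), prev_key=mvjoin(prev_abc, ",")
| where isnull(prev_key) OR this_key!=prev_key
| fields - prev_abc this_key prev_key

streamstats with current=f window=1 carries the previous row's values, and mvjoin collapses each multivalue set into one comparable string, so only rows that differ from their predecessor survive. An alternative worth testing is dedup abc consecutive=true, which removes only adjacent duplicates.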
Greetings Splunk Community, I am currently working on a search and I am trying to drop rows that have "NULL" in them. The problem I am running into is that some of my rows with "NULL" contain values like "nullnullNULL" or "nullNULL". Is there a way I can remove any row that contains "NULL", regardless of the other info in it? Thanks in advance!
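A minimal sketch, assuming the field to filter is named myfield (an assumption; substitute your own field name):

| where NOT match(myfield, "(?i)null")

match() performs an unanchored regex match, so it catches "nullnullNULL" and "nullNULL" too, and the (?i) flag makes it case-insensitive. If NULLs can appear in any of several fields, repeat the test per field and combine with OR.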
All, I'm using the SaaS controller. I'm familiar with metric rollup, but there is a difference in data granularity between the SaaS UI and the output from the metric-data REST endpoint. Currently I'm looking at some data in the UI and it shows 1-minute intervals for a week ago. However, when I query the same approximate time range from the REST API, it gives me 1-hour intervals. Rollup is false. I've tried adjusting the start time of my REST query to be less than 1 week ago; the time range of my query is 2 hours. Through trial and error, it looks like the API returns 1-hour granularity for times more than about 24 hours ago; somewhere between 12 and 24 hours ago I get 10-minute granularity, and 1-minute granularity for anything more recent. Is there a way I can get the 1-minute granularity from REST? Obviously the data exists if the UI is showing it to me. Thanks
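For reference, a request of the shape being described (controller host, credentials, application name, and metric path are placeholders; start/end times are epoch milliseconds):

curl -u "user@account:password" \
  "https://mycontroller.saas.appdynamics.com/controller/rest/applications/MyApp/metric-data?metric-path=Overall%20Application%20Performance%7CAverage%20Response%20Time%20(ms)&time-range-type=BETWEEN_TIMES&start-time=1649332800000&end-time=1649340000000&rollup=false&output=JSON"

rollup=false requests unaggregated points, but the granularity actually returned still depends on what resolution the controller has retained for data of that age, which matches the tiering behavior observed above.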
Hello, since 2018 our application has been logging to Azure Storage, in a single container, with "folders" broken down as: /Year2018/Month04/Day12/Hour15/Minute20/Subscription123/User/456/logtype.log. My goal is to pull these logs (JSON) into Splunk, so I set up the Add-on and began ingesting data... but it kept stopping at 2018, never getting to 2019/20/21/22. Investigating why, after quite a bit of tinkering, I found some internal logs that indicated: The number of blobs in container [containername] is 5000. Upon further research, that is the maximum number of records the API returns before forced paging kicks in; to get more, the client has to follow the move-next marker. So... I could edit the Python script myself, but is there another or better way to do this, or is a fix already in the works? And if not, and I make the change, is there a GitHub repo or similar where I can submit it?
I have a dashboard set up that returns a few searches for my organization. When I click the export button underneath a search, the export-results box pops up, and when I click Export it opens my file explorer at the last location I used. However, I also have Splunk set up on another network that always exports automatically to my Downloads folder. So I am wondering how to get that Splunk instance to open the file explorer at my last location when I go to export results.
Does anyone know the list of messages and what they mean when running ./splunk check-integrity -bucketPath [ bucket path ] [ -verbose ]? I searched all of Splunk, opened a ticket, and spoke to professional services, with no luck. Here are some examples of the messages:

Hash corresponding to slice...
Slices hash file (l1Hashes...
Hash of l1Hashes...
Journal has no hashes
Hash files missing in the bucket
Hello, what's the major difference between Splunk 8.2.4 and Splunk 8.2.6?
We're running into an issue using the Add-on for AWS with SQS-based S3 inputs to pull Aurora logs from S3 buckets. The .gz data in the buckets appears to be "double zipped," so even after Splunk extracts and indexes the data, our search results are still compressed and look like this: ��G�!x��#�V�~������ ��D����;~-lǯ=�������D� Our AWS admins confirmed the logs are zipped just once by Kinesis en route to S3 (not a .zip containing groups of .gz), but even extracting data directly from journal.gz on the indexers doesn't reveal plaintext logs. Any potential solutions/fixes for this?
I need to upgrade a Search Head Cluster from 7.3.4 to 8.1.9, and I have run the first two commands:

splunk upgrade-init shcluster-members
splunk edit shcluster-config -manual_detention on

We are monitoring the active searches using the following command:

splunk list shcluster-member-info | grep "active"

And we see:

active_historical_search_count:1
active_realtime_search_count:0

The active_historical_search_count seemed to never reduce to 0, but after 90 minutes it finally came down. We also checked the currently running searches and found some new searches running on the detention member after an hour. We have the following set in server.conf:

decommission_force_finish_idle_time = 0
decommission_node_force_timeout = 300
decommission_search_jobs_wait_secs = 180

...so why is it taking 90 minutes to stop running saved searches? We did find some saved searches that were running for a long time and we fixed them, but shouldn't all new searches be moved to another member once a node is in manual detention? What can I do to fix this, so that my SHC can be upgraded?
Hey Team, I have some 150+ IP addresses in CIDR format (e.g. 96.24.0.0/16), and my search results include values such as dst_ip=96.24.123.123 that I need to filter out. If it were just one range, I could simply do dst_ip!=96.24.0.0/16 or NOT dst_ip IN (96.24.0.0/16), but I have around 150+ CIDRs to filter out. I tried adding them to a lookup file, but CIDR matching in the lookup file doesn't seem to work. Can someone help?
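A minimal sketch, assuming a lookup file cidr_blocklist.csv with a column named cidr, and a lookup definition cidr_blocklist whose advanced options set Match type to CIDR(cidr) (all of these names are assumptions):

| lookup cidr_blocklist cidr AS dst_ip OUTPUT cidr AS matched_cidr
| where isnull(matched_cidr)

Plain .csv lookups do exact string matching, which is why 96.24.123.123 never matches 96.24.0.0/16; setting the CIDR match type on a lookup definition (not just the raw file) is what enables subnet matching.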
Hello, I have 2 CSVs in my Splunk.

Alert.csv, with the following columns and data:

Alert_Header   Alert_type   Date
JNA/athena_VICTORIA_Load [FAILED]   Autosys   01/03/2022
JNA/athena_VICTORIA_Staging [MAXRUN]   Autosys   01/03/2022
JNA/athena_MAIN_Load [FAILED]   Autosys   01/03/2022
JNA/athena_OLTP_Staging [MAXRUN]   Autosys   01/03/2022
NYP02000 has high_cpu   DATABASE   01/03/2022

Mapping.csv, with the following columns and data:

Alert_Type   Inclusion   Exclusion   Header
Autosys   athena   VICTORIA   ATHENA-Jobs
Autosys   VICTORIA   NONE   VICTORIA-Jobs
Database   high_cpu   NONE   CPU alerts

Output required:

Alert_Header   Alert_type   Date   Header
JNA/athena_VICTORIA_Load [FAILED]   Autosys   01/03/2022   VICTORIA-Jobs
JNA/athena_VICTORIA_Staging [MAXRUN]   Autosys   01/03/2022   VICTORIA-Jobs
JNA/athena_MAIN_Load [FAILED]   Autosys   01/03/2022   ATHENA-Jobs
JNA/athena_OLTP_Staging [MAXRUN]   Autosys   01/03/2022   ATHENA-Jobs
NYP02000 has high_cpu   DATABASE   01/03/2022   CPU alerts

Logic: the mapping file defines patterns to look up in Alert.csv. Any Alert_Header that contains "athena" (the Inclusion) but not "VICTORIA" (the Exclusion) is labeled "ATHENA-Jobs". Similarly, any Alert_Header that contains "VICTORIA" is labeled "VICTORIA-Jobs". NONE in the Exclusion column means there is no exclusion pattern to search for in the Alert_Header.

Can you please help with this query?
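A minimal sketch of one approach, joining every mapping row for the alert type and then applying the inclusion/exclusion patterns (note that the join values must match exactly; DATABASE vs. Database in the samples may need normalizing with lower() on both sides):

| inputlookup Alert.csv
| join type=inner max=0 Alert_type [ | inputlookup Mapping.csv | rename Alert_Type AS Alert_type ]
| where like(Alert_Header, "%" . Inclusion . "%") AND (Exclusion="NONE" OR NOT like(Alert_Header, "%" . Exclusion . "%"))
| table Alert_Header Alert_type Date Header

max=0 keeps all candidate mapping rows per alert, and the where clause then discards the ones whose inclusion pattern doesn't appear in the header, or whose exclusion pattern does.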
As shown below, I have only two events present in my index, but when I execute the search query below, I get duplicate entries in the result table:

index=****
| rex field=_raw "(?msi)(?<json_field>\{.+\}$)"
| spath input=json_field
| rename SCMSplunkLog.SCMFailureLog.appName as APPNAME, SCMSplunkLog.SCMFailureLog.eventType as EVENTTYPE, SCMSplunkLog.SCMFailureLog.payload.level as LEVEL, SCMSplunkLog.SCMFailureLog.payload.errorDescription as ERRORDESCRIPTION, SCMSplunkLog.SCMFailureLog.payload.startTime as STARTDATE, SCMSplunkLog.SCMFailureLog.payload.endTime as ENDDATE
| where APPNAME!="" and LEVEL="ERROR"
| table APPNAME, EVENTTYPE, STARTDATE, ENDDATE, LEVEL, ERRORDESCRIPTION

Can anyone please help me with this?

Edited: attached sample JSON:

{
  "SCMSplunkLog" : {
    "SCMFailureLog" : {
      "appName" : "Testing_splunk_alerts_log",
      "eventType" : "Testing_splunk_alerts_log",
      "payload" : {
        "level" : "ERROR",
        "startTime" : "2022-04-12T13:57:49.156Z",
        "successCount" : 0,
        "failureCount" : 0,
        "publishedCount" : 0,
        "errorCode" : 0,
        "errorDescription" : "ERROR: relation \"test.testLand\" does not exist\n Position: 8",
        "sourceCount" : 0,
        "endTime" : "2022-04-12T13:57:54.483Z"
      }
    }
  }
}
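Not a root-cause fix, but as a quick check that the duplication comes from the data rather than the query, a dedup on the raw event can be added right after the rex (sketch):

| dedup _raw

If the row count drops back to two, the same event is being indexed more than once (for example, by two inputs monitoring the same source); comparing the duplicate events' index, source, and splunk_server fields would confirm that.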
Hello, I am new to Splunk. I am trying to create a report that shows which servers our users connect to and on what ports. The report is going to be reviewed by a non-IT business unit, so it would be nice if the report could include a description of each port (for example: NetBIOS, SQL, LDAP, etc.). Is there a way to do that? Thank you in advance!
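A minimal sketch using a CSV lookup, assuming you create a file named port_descriptions.csv with columns port and description (the file, column, and field names here are all assumptions; dest_port should be whatever field holds the port in your data):

... your search ...
| lookup port_descriptions.csv port AS dest_port OUTPUT description AS port_description
| table user dest_host dest_port port_description

A row in the CSV would look like 389,LDAP. The mapping has to be populated (IANA's service-names registry is a common source for it).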
Hi, we need to export login events from Windows and Linux servers from Splunk Cloud Platform to another system for further analysis. In on-prem deployments we were able to use a forwarder to export syslog data over a TLS connection; is that an option in Splunk Cloud too? We see there's an option to use a REST API to get data from Splunk Cloud, but is that practical for a large amount of data, continuously? We need the data within a few seconds, and we are talking about a large number of servers, so I'm not sure polling with the REST API is the way to go. Alternatively, are there other ways, perhaps cloud-native ones like exporting to AWS CloudWatch or Kinesis streams? Thanks, Gabriel
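For scale reference, the REST option being weighed is the streaming export endpoint; a sketch (the stack name, credentials, and index names are placeholders, and the management port must be reachable/allowlisted on Splunk Cloud):

curl -u admin:changeme \
  "https://mystack.splunkcloud.com:8089/services/search/jobs/export" \
  -d search="search index=wineventlog OR index=oslogs earliest=-60s" \
  -d output_mode=json

Even though /services/search/jobs/export streams results as they are found, it still means running repeated searches, which is exactly where the polling concern comes from.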
Hello, I am using a scheduled report to fill a summary index. The report is supposed to work with index time and process everything that arrived within the last hour, so I configured the time range accordingly (index time restricted to the last hour, event time unrestricted). But somehow the scheduled report restricts the event time to the last 24 hours, which can yield no search results when the indexed data is from the day before yesterday. Does someone know why this restriction is happening, although the event time is supposed to be unrestricted? Thanks and best regards
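A minimal sketch of pinning both ranges in the SPL itself rather than in the time-range picker (the index name is a placeholder), since time modifiers written in the search string override the scheduler's dispatch window:

index=my_source_index earliest=1 latest=now _index_earliest=-1h@h _index_latest=@h
| ... aggregation ...

earliest=1 is epoch second 1, effectively "all time" for the event-time range, while _index_earliest/_index_latest keep the index-time window to the last full hour.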
I am trying to create a dashboard which shows % availability over a set period of time, calculated as all calls minus 5xx failures minus 400 failures. However, I am not sure whether 400 failures are also being counted in the successful-call line, and whether other 4xx failures are included in the fourHundredFail line. Is the below the correct way to calculate this? Thank you for your help!

vhost="mainbrand"
| eval successfulCall=if('httpstatus'=200 OR 'httpstatus'=201 OR 'httpstatus'=204 OR 'httpstatus'=401 OR 'httpstatus'=403 OR 'httpstatus'=404 OR 'httpstatus'=422 OR 'httpstatus'=429,1,0)
| eval fourHundredFail=if('httpstatus'=400,1,0)
| eval technicalFail=if(match(substr('httpstatus',1,1),"5"),1,0)
| eval totalSuccesfulCalls=successfulCall-fourHundredFail-technicalFail
| stats sum(successfulCall) as "2xx_or_4xx_Calls" sum(fourHundredFail) as "400_Failures" sum(technicalFail) as "5xx_Failures" sum(totalSuccesfulCalls) as "Total_Successful_Calls" by vhost
| eval percentageAvailability=(('Total_Successful_Calls'/'2xx_or_4xx_Calls')*100)
| eval percentageAvailability=round('percentageAvailability', 2)
| table vhost, "2xx_or_4xx_Calls", "400_Failures", "5xx_Failures", "Total_Successful_Calls", percentageAvailability
| appendpipe [stats avg(percentageAvailability) as averagePercentage]
| eval averagePercentage=round('averagePercentage', 2)
| sort by "percentageAvailability" asc
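A sketch of an alternative that classifies each event into exactly one bucket, which sidesteps the double-counting question entirely (same field names as above):

vhost="mainbrand"
| eval class=case(match('httpstatus', "^5"), "5xx_fail", 'httpstatus'=400, "400_fail", 'httpstatus'=200 OR 'httpstatus'=201 OR 'httpstatus'=204 OR 'httpstatus'=401 OR 'httpstatus'=403 OR 'httpstatus'=404 OR 'httpstatus'=422 OR 'httpstatus'=429, "success")
| stats count(eval(class="success")) as Total_Successful_Calls count(eval(class="400_fail")) as "400_Failures" count(eval(class="5xx_fail")) as "5xx_Failures" by vhost

With case(), an event can land in only one bucket, so no per-event subtraction is needed. One thing worth checking in the original: for a 400 event, successfulCall=0, fourHundredFail=1, and technicalFail=0, so totalSuccesfulCalls evaluates to -1 for that event, and summing those negative values pushes Total_Successful_Calls below the actual count of successful calls.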
I have a fairly large (3,400-record) search result that randomly contains non-ASCII characters in any one of its 20 fields. This normally is not an issue, but the sendemail.py script used by Splunk's email alerting system is erroring out because of it. Is there a way to remove non-ASCII characters from all search results? See below for the search:

| inputlookup StatusReport.csv
| fields Name ID BusinessGroup Class Email Issues Comments Dynamic Status Owner HoB
| sort ID
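A minimal sketch that strips non-ASCII characters from every field using foreach (the character class keeps printable ASCII only):

| inputlookup StatusReport.csv
| foreach * [ eval <<FIELD>>=replace('<<FIELD>>', "[^\x20-\x7E]+", "") ]
| fields Name ID BusinessGroup Class Email Issues Comments Dynamic Status Owner HoB
| sort ID

replace() with [^\x20-\x7E] removes anything outside the printable ASCII range, and running it before the fields/sort means every column is cleaned before sendemail.py sees the results.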
We're running Splunk 8.2.2 with the Microsoft Azure Add-on version 3.1.1, installed on a heavy forwarder. Most of the time the logs pull fine, but every 1-2 months they stop pulling. If we let it sit for ~8 hours it eventually starts working again; if we disable and then re-enable the input, it starts working right away. What steps should we take next time this happens to troubleshoot why it is occurring?
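A starting point next time it stalls is the add-on's own logging in _internal on the heavy forwarder; a sketch (the source wildcard is an assumption; adjust it to the actual log file names the add-on writes under $SPLUNK_HOME/var/log/splunk):

index=_internal source=*azure* (ERROR OR WARN*)

Errors or HTTP 429 (throttling) responses around the time the input stops would help distinguish API rate limiting from a hung modular input, which is what the disable/re-enable recovery hints at.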
Hello everyone! My search results contain a row like A=1, B=1, C=3, and I have a lookup like:

Type A B C
server1 1 1 4
server2 1 1 5
server3 1 1 6

I need a search that compares my results with the lookup: if at least one value in the corresponding lookup column is equal, the column should be true; if not, false. For example, with A=1: server1, server2, and server3 all have 1 in column A, so the result should be A=true. Same for B. But column C contains no "3", so C is false. Help me please.
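A minimal sketch, assuming the lookup file is saved as servers.csv and A, B, and C are single-valued in the base search (the file name is an assumption):

... base search producing A, B and C ...
| join type=left A [ | inputlookup servers.csv | fields A | dedup A | eval A_found="true" ]
| join type=left B [ | inputlookup servers.csv | fields B | dedup B | eval B_found="true" ]
| join type=left C [ | inputlookup servers.csv | fields C | dedup C | eval C_found="true" ]
| eval A=coalesce(A_found, "false"), B=coalesce(B_found, "false"), C=coalesce(C_found, "false")
| fields - A_found B_found C_found

Each join matches when the event's value exists anywhere in that lookup column; a left join leaves the *_found field null when there is no match, which coalesce turns into "false".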