All Topics

What is wrong with the query below? It does not return any value in the timestamp field. The attached image shows a result sample.

index="jamf" sourcetype="jssUapiComputer:computerGeneral"
| dedup computer_meta.serial
| eval timestamp = strptime(computerGeneral.lastEnrolledDate, "%Y-%m-%dT%H:%M:%S.%3QZ")
| eval sixtyDaysAgo = relative_time(now(), "-60d")
| table computer_meta.name, computerGeneral.lastEnrolledDate, timestamp, sixtyDaysAgo
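A likely cause, offered as a sketch: inside eval, an unquoted dotted name like computerGeneral.lastEnrolledDate is parsed as the concatenation of two (null) fields, so strptime receives nothing. Wrapping the field name in single quotes usually fixes it:

index="jamf" sourcetype="jssUapiComputer:computerGeneral"
| dedup computer_meta.serial
| eval timestamp = strptime('computerGeneral.lastEnrolledDate', "%Y-%m-%dT%H:%M:%S.%3QZ")
| eval sixtyDaysAgo = relative_time(now(), "-60d")
| table computer_meta.name, computerGeneral.lastEnrolledDate, timestamp, sixtyDaysAgo
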
Hi, most of the Splunk forwarders installed on our servers run as NT AUTHORITY\SYSTEM, and I would like to change this to a local admin account. I have tried modifying the Ansible roles to fix this but haven't been successful. Any ideas on what can be done would be appreciated.

Hi everyone, hope everyone is alright. I have the base search below and am trying to build an alert.

index=idx-cloud-azure "*09406b3b-b643-4e86-876e-4cd5f5a8be57*" metricName=MemoryPercentage OR metricName=CpuPercentage

This is the condition I have to follow: CpuPercentage > 85 OR MemoryPercentage > 85, where CpuPercentage and MemoryPercentage are values of a field called metricName. I am doing it like this:

index=idx-cloud-azure "*09406b3b-b643-4e86-876e-4cd5f5a8be57*"
| eval metricCount=if((metricName="MemoryPercentage" OR metricName="CpuPercentage"),1,0)
| stats count by metricCount
| where MemoryPercentage > 85 OR CpuPercentage > 85

I'm not sure if this is the correct way to do it. Could anyone please suggest a better way? Thanks in advance.
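One issue worth noting: after stats count by metricCount, the fields MemoryPercentage and CpuPercentage no longer exist, so the where clause can never match. A hedged sketch, assuming the numeric reading arrives in a field named value (not shown in the post):

index=idx-cloud-azure "*09406b3b-b643-4e86-876e-4cd5f5a8be57*" metricName IN (MemoryPercentage, CpuPercentage)
| stats max(value) as maxValue by metricName
| where maxValue > 85
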
Hi community, I have installed the Splunk Add-on for Salesforce on a Heavy Forwarder and have been collecting data from Salesforce Object and Event Log. I've noticed that sfdc:logfile is huge and I don't need all the records, but from the UI there is no way to filter the collection. Is there a way we can filter on EVENT_TYPE? I only need events with EVENT_TYPE="LightningPageView". Any help is appreciated. Thank you, Marta
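A sketch of the standard nullQueue whitelist pattern on the Heavy Forwarder, assuming the sfdc:logfile events pass through its parsing pipeline; the regex is an assumption about how EVENT_TYPE appears in the raw record:

# props.conf
[sfdc:logfile]
TRANSFORMS-filterEventType = sfdc_drop_all, sfdc_keep_lightning

# transforms.conf
# First rule routes everything to nullQueue; second rule rescues matching events.
[sfdc_drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[sfdc_keep_lightning]
REGEX = LightningPageView
DEST_KEY = queue
FORMAT = indexQueue
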
My JSON file contains 2 stages in total, as below:

per_stage_info_vendor_data: [
  { Stage: stage1, WallClockTime: 0h:30m:23s }
  { Stage: stage2, WallClockTime: 0h:52m:36s }
]

With the following regular expression we are able to get the hours, minutes and seconds:

| rex field=per_stage_info_vendor_data{}.WallClockTime max_match=0 "((?<hours>\d+)h:(?<minutes>\d+)m:(?<seconds>\d+)s)"

But when I tried

| eval stagetime=hours*3600+minutes*60+seconds

it's not working, and when I checked further, neither does any other arithmetic operation on these three fields (hours, minutes and seconds). Do I need to convert these fields to another format?
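A likely explanation, offered as a sketch: with max_match=0 the hours, minutes and seconds fields are multivalue, and eval arithmetic returns null on multivalue inputs. Zipping the values together and expanding to one row per stage usually works:

| rex field=per_stage_info_vendor_data{}.WallClockTime max_match=0 "(?<hours>\d+)h:(?<minutes>\d+)m:(?<seconds>\d+)s"
| eval hms=mvzip(mvzip(hours,minutes),seconds)
| mvexpand hms
| eval stagetime = tonumber(mvindex(split(hms,","),0))*3600 + tonumber(mvindex(split(hms,","),1))*60 + tonumber(mvindex(split(hms,","),2))
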
| eval lastmodifiedWeek=strftime(epoc_last_modified,"%Y-%V")
| eval timeline="30-Oct-23"
| eval timeline_date=strptime(timeline,"%d-%b-%y")
| eval new_timeline=strftime(timeline_date,"%Y-%V")
| where lastmodifiedWeek<=new_timeline
| join max=0 type=left current_ticket_state
    [| inputlookup weekly_status_state_mapping.csv
     | rename Status as current_ticket_state
     | table current_ticket_state Lookup]
| stats count by Lookup lastmodifiedWeek
| eval timeline1 = strptime(lastmodifiedWeek." 1", "%Y-%U %w")
| eval timeline2 = relative_time(timeline1,"-1w@w1")
| eval timeline = strftime(timeline2, "%Y-%m-%d")
| table timeline, Lookup, count
| chart values(count) as count over timeline by Lookup
| fillnull value=0
| tail 4
| reverse

Hi Team, I have a table with counts for these attributes: Re-ProcessRequest count, objectType, objectIdsCount, uniqObjectIdsCount, sqsSentCount, dataNotFoundIds.

1. How can I arrange the table columns as I need? Currently dataNotFoundIds shows in the second column, but I want to display it in the last column, and similarly for other columns.
2. How can I filter based on objectType, then do addcoltotals and display the total count? (A sketch for both appears after the query below.)

index="" source IN "" "support request details"
| stats count
| rename count as Re-ProcessRequest
| join left
    [ search index="" source IN "" "input params" OR "sqs sent count" OR "Total messages published to SQS successfully" OR "unique objectIds" OR "data not found for Ids"
    | rex "\"objectType\":\"(?<objectType>[^\"]+)"
    | rex "\"objectIdsCount\":\"(?<objectIdsCount>[^\"]+)"
    | rex "\"uniqObjectIdsCount\":\"(?<uniqObjectIdsCount>[^\"]+)"
    | rex "\"sqsSentCount\":\"(?<sqsSentCount>[^\"]+)"
    | rex "\"dataNotFoundIds\":\"(?<dataNotFoundIds>[^\"]+)"
    | rex "\"totalMessagesPublishedToSQS\":\"(?<totalMessagesPublishedToSQS>[^\"]+)"
    | table objectType,objectIdsCount,sqsSentCount,totalMessagesPublishedToSQS,uniqObjectIdsCount,dataNotFoundIds
    | addcoltotals labelfield=total label="Total"
    | tail 1
    | stats list(*) as * ]
| join
    [ search index="" source IN "" "dataNotFoundIds"
    | spath output=payload path=dataNotFoundIds{}
    | spath input=_raw
    | stats count by payload
    | addcoltotals labelfield=total label="Total"
    | tail 1
    | fields - payload,total
    | rename count as datanotfound]
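A hedged sketch for both points: a final table (or fields) command fixes the column order explicitly, and a search on objectType before the totals handles the filter; "SomeType" is a placeholder value:

...
| search objectType="SomeType"
| table Re-ProcessRequest objectType objectIdsCount uniqObjectIdsCount sqsSentCount totalMessagesPublishedToSQS dataNotFoundIds
| addcoltotals labelfield=objectType label="Total"
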
I want the subsearch result, which is a list of tracking IDs, in my main query's IN clause, but none of my attempts are working. The subsearch and the main search work individually, but after combining them nothing returns. I tried the three options below and none of them work:

1. index="dockerlogs-silver" source="*gps-external-processor-prod*" "Handle 500 Server error" OR "Handle 4xx error"
| where traceID IN ([search index="dockerlogs-silver" source="*gps-external-processor-prod*" "00012342231515417786" | stats values(traceID) as trackingID | eval trackingid="\"".mvjoin(trackingid,"\",\"")."\""])

2. index="dockerlogs-silver" source="*gps-external-processor-prod*" "Handle 500 Server error" OR "Handle 4xx error"
| where traceID IN ([search index="dockerlogs-silver" source="*gps-external-processor-prod*" "00012342231515417786" | stats values(traceID) as trackingid | table trackingid | stats values(eval("\"".trackingid."\"")) as search delim="," | nomv search])

3. index="dockerlogs-silver" "Handle 500 Server error" OR "Handle 4xx error"
| where traceID IN ([index="dockerlogs-silver" source="*gps-external-processor-prod*" "00012342231515417786" | stats values(traceID) | format])
| table traceID
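A commonly used sketch that avoids where ... IN entirely: let the subsearch render directly into the base search with format, which expands to ( ( traceID="..." ) OR ( traceID="..." ) ):

index="dockerlogs-silver" source="*gps-external-processor-prod*" ("Handle 500 Server error" OR "Handle 4xx error")
    [ search index="dockerlogs-silver" source="*gps-external-processor-prod*" "00012342231515417786"
      | dedup traceID
      | fields traceID
      | format ]
| table traceID
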
Hello, when I click "Open in Search", I get the following message:

Request-URI Too Long
The requested URL's length exceeds the capacity limit for this server.

I don't get the message if I copy and paste the search manually. Why does Splunk send searches via a GET request, and how do I fix this without an admin role? Thank you for your help.

I want to work with big data using Splunk. To reduce search time, I want to select specific data from the original data, pre-process it, and save the output in CSV format. I also want to build a dashboard from that output data. Please let me know about example queries or a helpful article.
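A minimal sketch, with the index, sourcetype, field names, and lookup file name all placeholders: summarize the raw data once, write the result to a CSV lookup, then point dashboard panels at the small lookup instead of the raw index:

index=my_index sourcetype=my_sourcetype earliest=-7d
| fields _time, host, status
| stats count by host, status
| outputlookup my_summary.csv

A dashboard panel can then read it back quickly with:

| inputlookup my_summary.csv
| stats sum(count) as total by host
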
In this dataset, transactions (#3 + #9 + #10, Mike) and (#5 + #7 + #11, Alex) would be displayed.

#   Time   User   Transaction
1   12:01  David  Login from 1.1.1.1
2   12:01  Joe    Login from 2.2.2.2
3   12:02  Mike   Login from 1.1.1.1
4   12:03  David  Something else
5   12:05  Alex   Login from 1.1.1.1
6   12:06  Mike   Something else
7   12:09  Alex   Delete table
8   12:10  Joe    Delete table
9   12:06  Mike   Delete table
10  12:09  Mike   Insert Table
11  12:14  Alex   Insert Table
12  12:20  David  Delete table

Looking for one search to find all events where, within 10 minutes:
1. A user logged in from IP address 1.1.1.1 (search: userIP="1.1.1.1" transaction="Logged")
2. The same user then deleted a table (search: databaseAction="DeleteTable")
3. The same user then inserted a table (search: databaseAction="InsertTable")

I can use startswith and endswith with transaction, but this only gives me the first and last event, not the second.
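A hedged sketch using the field names from the post: let transaction bound each group by the login and the insert, then keep only groups that also contain the delete (after transaction, databaseAction is multivalue, so the final search matches transactions containing that value):

(userIP="1.1.1.1" transaction="Logged") OR databaseAction="DeleteTable" OR databaseAction="InsertTable"
| transaction User maxspan=10m startswith=eval(transaction="Logged" AND userIP="1.1.1.1") endswith=eval(databaseAction="InsertTable")
| search databaseAction="DeleteTable"
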
I have a custom solution to forward CloudWatch Logs events to Splunk Cloud. It works great! However, I am trying to use a pair of HFs configured as Fargate containers, 4 instances of each. I am treating them as 4 on the A side and 4 on the B side of an HA configuration. I'm trying to approximate the functionality of the UF > HF autoLB in outputs.conf, only in this case the "UF" is a Lambda function. I tried sending events to one HF instance on both the A and B sides, but I end up with duplicates for every event, which makes complete sense as there is no auto-dedup.

What I want to do for now is bring up a single HF that receives ALL traffic from the A-side and B-side HF instances, configure it to dedup all events, and send the result to Splunk Cloud. Is this doable? Would it create much latency? How would I configure that: inputs.conf, transforms, props? (I have outputs covered.) Thank you, Mike

Hi Team, below is my raw log:

2023-09-29 14:10:05.598 [ERROR] [Thread-3] CollateralFileGenerator - *****************************************FAILURE in sending control file collateral files to ABS Suite!!!*****************************************

I want to extract "FAILURE in sending control file collateral files to ABS Suite!!!" as my ERROR message. Can someone guide me on this?
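A sketch that captures the text between the two runs of asterisks into a field I've named error_message:

| rex "\*+(?<error_message>FAILURE[^*]+)\*+"
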
Hello, I have a field called "startTime":

startTime: 1699148280000

I would like to convert it to a human-readable time but am not having any luck. Any help would be appreciated.
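A sketch, assuming the value is epoch milliseconds (13 digits), so it must be divided by 1000 before formatting:

| eval startTime_readable = strftime(startTime/1000, "%Y-%m-%d %H:%M:%S")
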
I have events that return different structured fields depending on the value of a field called TYPE. This all comes from the same sourcetype. For example:

if type=TYPE1, I might have fields called: TYPE1.exe, TYPE1.comm, TYPE1.path, TYPE1.filename
if type=TYPE2, I might have fields called: TYPE2.comm, TYPE2.path, TYPE2.host

As you can see, each type brings a different set of base fields. We are using data model searches, so I want to get these base fields into CIM compliance. Is there a way to create stanzas in props.conf or transforms.conf that will allow me to field-alias these values based on the type value? I tried straight-out field aliasing in props.conf only to find I was actually overwriting values due to the precedence/order of my field alias commands. Thanks in advance,
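One hedged approach: FIELDALIAS cannot be made conditional, but a calculated field using coalesce picks whichever source field exists in each event, which sidesteps the overwrite problem; the sourcetype stanza and CIM field names below are illustrative:

# props.conf
[your_sourcetype]
EVAL-process = coalesce('TYPE1.comm', 'TYPE2.comm')
EVAL-process_path = coalesce('TYPE1.path', 'TYPE2.path')
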
Hello! I have this search, and I want to add more parameters like time etc. The thing is, when I'm using rare it shows only SHA256HashData and the count:

index=myindex
| stats count by SHA256HashData
| rare SHA256HashData

Any ideas? Thanks!
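A sketch that emulates rare while keeping time context: compute first/last seen alongside the count, then sort ascending and keep the least-common hashes (the limit of 10 is arbitrary):

index=myindex
| stats count, earliest(_time) as firstSeen, latest(_time) as lastSeen by SHA256HashData
| sort count
| head 10
| convert ctime(firstSeen) ctime(lastSeen)
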
I have a problem where the cluster master and deployment server in our distributed Splunk environment cannot be logged in to via the GUI; the login page states that there are no users. How do I set up a user?
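A hedged sketch of the documented recovery path, assuming local CLI access: move $SPLUNK_HOME/etc/passwd aside, seed a new admin with user-seed.conf, and restart Splunk:

# $SPLUNK_HOME/etc/system/local/user-seed.conf
# Applied at startup when no local users exist; replace the password placeholder.
[user_info]
USERNAME = admin
PASSWORD = <new password>
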
In this blog post, we'll take a look at key terms, best practices, and tools commonly used for server monitoring. Splunk can monitor the performance of all your servers, containers, and apps in real time with Splunk Infrastructure Monitoring. The term "server monitoring" is complex because of the exceptionally wide range of servers that exist.

For more, visit the official website: https://www.splunk.com/en_us/blog/learn/server-monitoring/ServiceNow/.html

Regards: @marksmith991

I have a Splunk array:

per_stage_data: [
  { Stage: S1, TimeTaken: 0h:30m:23s }
  { Stage: S2, TimeTaken: 0h:52m:36s }
]

I am implementing a Splunk dashboard, and in it I want to convert per_stage_data{}.TimeTaken into seconds. I have tried multiple ways but none of them worked. I tried the solution below, but it's not giving any output:

rex field="_raw" "CallDuration: (?<hours>\d+)h:(?<minutes>\d+)m:(?<seconds>\d+)s"
| eval CallDurationInSeconds = ((hours*60*60)+(minutes*60)+(seconds))

I appreciate any inputs. Thanks in advance.
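Two things stand out, offered as a sketch: the rex looks for "CallDuration:" while the raw event says "TimeTaken:", and with two stages per event you need max_match=0 plus mvexpand before doing arithmetic:

| rex field=_raw max_match=0 "TimeTaken: (?<hours>\d+)h:(?<minutes>\d+)m:(?<seconds>\d+)s"
| eval hms=mvzip(mvzip(hours,minutes),seconds)
| mvexpand hms
| eval TimeTakenSeconds = tonumber(mvindex(split(hms,","),0))*3600 + tonumber(mvindex(split(hms,","),1))*60 + tonumber(mvindex(split(hms,","),2))
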
Hi, I have a PowerShell script that makes an API call to an external service. How can I use this script in Phantom?