All Topics

Hi Legends, I want to know whether this type of Splunk query is possible to create. We want a query that pulls two sets of data. For example, if I run a query with the time picker set to the last 4 hours, it will pull the last 4 hours of data from the current time, i.e. 09/03/2023 11:30 AM to 09/03/2023 03:30 PM. Along with this data, it should also pull last month's data for the same timeframe, i.e. 09/02/2023 11:30 AM to 09/02/2023 03:30 PM. The purpose of this query is to see the month-on-month growth.
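A minimal sketch of one way to do this, assuming a hypothetical index name (myindex); the current window is searched directly, and an append subsearch shifts the same window back one month using chained relative-time modifiers:

index=myindex earliest=-4h latest=now
| eval period="current"
| append
    [ search index=myindex earliest=-1mon-4h latest=-1mon
      | eval period="previous_month" ]
| stats count by period

Because the subsearch carries its own earliest/latest, it ignores the time picker, so the outer window would normally be hard-coded or passed in as tokens rather than taken from the picker.
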
Hi All, I have a requirement to monitor whether a database is running or down and send an alert. This should be monitored at the OS level, and the database runs on Linux. Can anyone please help me with how to achieve this?
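A minimal sketch of one common approach, assuming the Splunk Add-on for Unix and Linux is installed on that host with the ps.sh scripted input enabled; the index, host, and process names (os, mydbhost, mysqld) are placeholders:

index=os sourcetype=ps host=mydbhost COMMAND=*mysqld*
| stats count
| where count=0

Saved as an alert that triggers when results are returned, this fires whenever no matching database process shows up in the ps output.
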
1st query

index=mail NOT [ | inputlookup suspicoussubject_keywords.csv | rename keyword AS query | fields query ]
| lookup email_domain_whitelist domain AS RecipientDomain output domain as domain_match
| where isnull(domain_match)
| lookup all_email_provider_domains domain AS RecipientDomain output domain as domain_match2
| where isnotnull(domain_match2)
| stats values(recipient) as recipient values(subject) as subject earliest(_time) AS "Earliest" latest(_time) AS "Latest" by RecipientDomain sender
| where mvcount(recipient)=1
| eval subject_count=mvcount(subject)
| sort - subject_count
| convert ctime("Latest")
| convert ctime("Earliest")

2nd query

index=o365
| dedup Id
| rename _time as DateTime, PolicyDetails{}.PolicyName as PolicyName, PolicyDetails{}.Rules{}.RuleName as RuleName, ExchangeMetaData.UniqueID as UniqueID, ExchangeMetaData.Subject as Subject, ExchangeMetaData.From as Sender, ExchangeMetaData.To{} as Recipient, ExchangeMetaData.CC{} as CC, ExchangeMetaData.BCC{} as BCC, ExchangeMetaData.RecipientCount as RecipientCount, PolicyDetails{}.Rules{}.ConditionsMatched.SensitiveInformation{}.Count as SensitiveInformationCount, PolicyDetails{}.Rules{}.ConditionsMatched.SensitiveInformation{}.SensitiveInformationDetections.DetectedValues{}.Name as PIIName, PolicyDetails{}.Rules{}.ConditionsMatched.SensitiveInformation{}.SensitiveInformationDetections.DetectedValues{}.Value as PIIValue, PolicyDetails{}.Rules{}.ConditionsMatched.SensitiveInformation{}.Location as Location
| dedup UniqueID
| rex field=Recipient "@(?<domain>.*$)"
| rex field=CC "@(?<domain>.*$)"
| rex field=BCC "@(?<domain>.*$)"
| eval domain=lower(domain)
| lookup email_domain_whitelist domain output domain as domain_match
| where isnull(domain_match)
| stats values(Recipient) values(CC) values(BCC) values(domain) Count sum(SensitiveInformationCount) by PolicyName Subject Sender
| sort +values(domain)

Hi, I would like to combine the first query into the second query. The second query only shows events that match the policy; anything else is not shown. I want to show both the events that match the policy and those that do not, with the policy field left empty when there is no match. Please advise. The index will be o365.
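One generic pattern for keeping the non-matching rows, sketched with Sender and Subject as placeholder join keys (use whatever fields the two result sets actually share): run the broader search as the base, left-join the policy-matching results, and blank the policy field where nothing matched:

index=o365
| dedup Id
| rename ExchangeMetaData.From as Sender, ExchangeMetaData.Subject as Subject
| join type=left Sender Subject
    [ search index=o365 "PolicyDetails{}.PolicyName"=*
      | rename ExchangeMetaData.From as Sender, ExchangeMetaData.Subject as Subject, PolicyDetails{}.PolicyName as PolicyName
      | fields Sender Subject PolicyName ]
| fillnull value="" PolicyName
| table Sender Subject PolicyName

The rest of the second query's renames and lookups would go in the appropriate branch; join has subsearch size limits, so a stats-based merge may be needed at scale.
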
Hello Splunkers!! As shown below, we have these two files carrying payload events, which we are already monitoring. But on a daily basis we want to monitor the newly created files (with new timestamps) and delete the already-monitored files from that path. Is there any mechanism to achieve this?

WPLAT_order_2023-03-07T14-35-21.669Z.json
WPLAT_order_2023-03-08T15-45-30.232Z.json

For example:

Day 1: under the D:\splunk folder we are monitoring these two files:
WPLAT_order_2023-03-07T14-35-21.669Z.json
WPLAT_order_2023-03-08T15-45-30.232Z.json

Day 2: we need to delete the day 1 files from the D:\splunk folder and monitor the newly created files with new timestamps:
WPLAT_order_2023-03-09T11-35-21.669Z.json
WPLAT_order_2023-03-10T12-45-30.232Z.json
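A minimal sketch using Splunk's batch input, which indexes each file once and then deletes it (move_policy = sinkhole is mandatory for batch); the sourcetype and index names are placeholders:

[batch://D:\splunk\WPLAT_order_*.json]
move_policy = sinkhole
sourcetype = wplat_order
index = main
disabled = 0

Note that sinkhole deletes each file as soon as it is indexed, not on a daily schedule, so this only fits if nothing else needs the files after ingestion.
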
If we are using the Dell PowerScale Add-on for REST API calls, are the following syslog steps needed? What is the purpose of syslog forwarding to a Splunk forwarder if the add-on performs REST API calls to the Isilon cluster to pull this data?

To enable forwarding syslog data on any Isilon cluster version, perform the following steps:

1. Make the following changes in the file /etc/mcp/override/syslog.conf (copy it from /etc/mcp/default/syslog.conf if not present): put @<forwarders_ip_address> in front of the required log file, and !* at the end of the syslog.conf file.
2. Restart syslogd using this command: /etc/rc.d/syslogd restart

In some cases, the syslog.conf file is already present in the /etc/mcp/override directory but is empty. In that case, just put the log file name and the forwarder IP in that file. Below is the content of a sample syslog.conf:

auth.* @<forwarders_ip_address>
!audit_config
*.* @<forwarders_ip_address>
!audit_protocol
*.* @<forwarders_ip_address>
!*

Run the following commands to enable protocol, config, and syslog auditing according to the Isilon OneFS version. For a Dell Isilon cluster with OneFS version 9.x.x:

isi audit settings global modify --protocol-auditing-enabled Yes
isi audit settings global modify --config-auditing-enabled Yes
isi audit settings global modify --config-syslog-enabled Yes
isi audit settings modify --syslog-forwarding-enabled Yes
I am trying to make two searches using different indexes and sources. The first search looks for all entries with "message sent" and the "MessageId". The second search looks for the "MessageId" and "message received". I am trying to find the number of messages that were sent but not received by comparing the message IDs, and to list the message IDs that were not received.

Example attempt:

(index=exampleindex1 source=examplesource1 ("message sent" AND "MessageId")) OR (index=exampleindex2 source=examplesource2 ("message received" AND "MessageId"))
| rex field=_raw "MessageId:(?<messageId>[\S]+)\s.*"
| stats values(*) as * by messageId
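A minimal sketch of one way to isolate the sent-but-not-received IDs, assuming the example index and source names above: keep each messageId that appears only in the "sent" index:

(index=exampleindex1 source=examplesource1 "message sent") OR (index=exampleindex2 source=examplesource2 "message received")
| rex field=_raw "MessageId:(?<messageId>\S+)"
| stats dc(index) as index_count values(index) as indexes by messageId
| where index_count=1 AND indexes="exampleindex1"
| table messageId
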
Hi AppDynamics team, I have hit an issue when enabling code obfuscation in ProGuard after upgrading the AppDynamics version from 20.11.1 to 22.2.2. I also added the code below to proguard-rules.pro:

-keep class com.appdynamics.eumagent.runtime.DontObfuscate
-keep @com.appdynamics.eumagent.runtime.DontObfuscate class * { *; }

It fails in:

> Task :app:appDynamicsProcessProguardMappingDevRelease FAILED
Execution failed for task ':app:appDynamicsProcessProguardMappingDevRelease'.
> Bad Credentials, please verify the account and licenseKey in your build.gradle file.

But I can confirm that the licenseKey is correct and has not changed (it works fine in version 20.11.1). Could you please help me out with this? Thanks
Hi, I have an alert scheduled to run every day at 7 AM, and it runs with Time Range: Yesterday. I want to know how Splunk interprets this. If today is Thursday, am I supposed to get all data up to Wednesday 7 AM as the cutoff time, or is Wednesday 23:59:59 the cutoff? Please advise on the above. Thanks in advance.
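For reference, the Yesterday preset maps to day-snapped relative time modifiers, i.e. midnight to midnight rather than a rolling 24 hours ending at run time; a sketch of the equivalent inline range (index name is a placeholder):

index=myindex earliest=-1d@d latest=@d

So a run at 7 AM on Thursday covers Wednesday 00:00:00 up to (but not including) Thursday 00:00:00.
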
I'm trying to use spath to extract fields from a JSON object in an event. This is the event:

2023-03-08T22:47:06.66452157Z app_name=assistedonboardi environment=e1 ns=assistedonboarding-intra pod_container=assistedonboardi pod_name=assistedonboardi-deployment-19-64w7w stream=stdout message={"schemaVersion":"0.3.0","application":{"name":"One App","version":"5.15.5-fec34698"},"device":{"gent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36"},"level":"error","timestamp":"2023-03-08T22:47:06.664Z","error":{"name":"ClientReportedError","message":"<none>","stacktrace":"<none>"},"request":{"address":{"uri":"https://test?authBlueCorrelationId=234dkfhdf&redirects=1"},"metaData":{"moduleID":"axp-global-onboarding-corporate-application-capture-us","opportunityId":"testid","companyName":"","cdacApplicationStatus":"","marketIso2":"US","correlationId":"a17a9feb-eb54-40ae-951e-f6648e02ab88"}}}

message contains the JSON object, and I want, for example, to extract the opportunityId, so I'm trying this:

ns=assistedonboarding-intra AND axp-global-onboarding-corporate-application-capture-us
| fields message
| spath output=opportunityId path=message.request.metaData.opportunityId

But nothing actually happens. Any help would be appreciated.
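A minimal sketch of the likely fix: when the JSON lives in the message field rather than in _raw, point spath at that field with input=message and drop the message. prefix from the path, since spath paths are relative to the input field:

ns=assistedonboarding-intra AND axp-global-onboarding-corporate-application-capture-us
| spath input=message path=request.metaData.opportunityId output=opportunityId
| table opportunityId
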
Hello All, I have been scouring the community and other boards, but for the life of me I cannot create an SPL query that gets the results I need. Below is what I am trying to accomplish; any direction and help would be greatly appreciated.

I have 2 sourcetypes and 1 lookup table.

SourceA has fields (ID_a, name, date, trimmed_name, env, etc.)
SourceB has fields (ID, title_id, title, description, solution, etc.)
Lookup has fields (trimmed_name, department, etc.)

SourceA gets updated weekly, therefore in the existing query I have earliest=-7d to exclude previous data.

Characteristics of SourceA: name is the only unique field; ID_a will be duplicated across name depending on the date field; env is duplicated across the dataset; trimmed_name is a field created by trimming name. SourceA's ID_a and SourceB's ID are the common field between the two datasets.

Characteristics of SourceB: there are duplicates of ID, title_id, title, description, and solution. When deduping on title_id, the values of title, description, solution, etc. become unique. This is why I called SourceB a knowledge base: we can dedup SourceB down to the fields above to get a finite list. SourceB will have multiple values of ID, but we only need to return fields/values where the dedup'd SourceA ID_a exists. This means that if 100 events in SourceA dedup by ID_a down to 2 ID_a values, those 2 ID_a values are what we need to find in SourceB, returning their title_id, title, description, and solution values.

At a minimum, the final results needed per source are:
SourceA: name, date, env
SourceB: title_id, title, description, solution (other fields can be omitted, assuming we can add specific fields back as needed)
Lookup: department (other fields can be omitted, assuming we can add specific fields back as needed)

SourceA:
ID_a | name   | trimmed_name | date
ABC  | ABC_n  | AB           | 01/01/2023
ABC  | ABC_n1 | AB           | 01/02/2023
ABC  | ABC_n2 | AB           | 01/03/2023
XYZ  | XYZ_n  | XY           | 02/01/2023
XYZ  | XYZ_n1 | XY           | 02/02/2023

SourceB:
ID  | title_id | title  | description | solution
ABC | 12345    | ABC_t  | ABC_d       | ABC_s
ABC | 23456    | ABC_t1 | ABC_d1      | ABC_s1
ABC | 93648    | ABC_t2 | ABC_d2      | ABC_s2
XYZ | 23456    | XYZ_t  | XYZ_d       | XYZ_s
XYZ | 38840    | XYZ_t1 | XYZ_d1      | XYZ_s1
MNO | 43245    | MNO_t  | MNO_d       | MNO_s
MNO | 36485    | MNO_t1 | MNO_d1      | MNO_s1
RST | 84678    | RST_t  | RST_d       | RST_s

Lookup Table:
trimmed_name | department
AB           | ABC_dep
XY           | XYZ_dep
MN           | MNO_dep

Intended Result:
sourceA.name | sourceB.title_id | sourceB.title | sourceB.description | sourceB.solution | sourceA.date | lookup.department
ABC_n        | 12345            | ABC_t         | ABC_d               | ABC_s            | 01/01/2023   | ABC_dep
ABC_n        | 23456            | ABC_t1        | ABC_d1              | ABC_s1           | 01/01/2023   | ABC_dep
ABC_n        | 93648            | ABC_t2        | ABC_d2              | ABC_s2           | 01/01/2023   | ABC_dep
ABC_n1       | 12345            | ABC_t         | ABC_d               | ABC_s            | 01/02/2023   | ABC_dep
ABC_n1       | 23456            | ABC_t1        | ABC_d1              | ABC_s1           | 01/02/2023   | ABC_dep
ABC_n1       | 93648            | ABC_t2        | ABC_d2              | ABC_s2           | 01/02/2023   | ABC_dep
ABC_n2       | 12345            | ABC_t         | ABC_d               | ABC_s            | 01/03/2023   | ABC_dep
ABC_n2       | 23456            | ABC_t1        | ABC_d1              | ABC_s1           | 01/03/2023   | ABC_dep
ABC_n2       | 93648            | ABC_t2        | ABC_d2              | ABC_s2           | 01/03/2023   | ABC_dep
XYZ_n        | 23456            | XYZ_t         | XYZ_d               | XYZ_s            | 02/01/2023   | XYZ_dep
XYZ_n        | 38840            | XYZ_t1        | XYZ_d1              | XYZ_s1           | 02/01/2023   | XYZ_dep
XYZ_n1       | 23456            | XYZ_t         | XYZ_d               | XYZ_s            | 02/02/2023   | XYZ_dep
XYZ_n1       | 38840            | XYZ_t1        | XYZ_d1              | XYZ_s1           | 02/02/2023   | XYZ_dep

Here is the query I have been playing with. From what I gather, I get events for the query, but nothing displays when I try to get data from SourceB and SourceA at the same time:

index="myindex" (sourcetype="SourceB" type=INFO) OR (sourcetype="SourceA" date<2023-02-14 env=envA)
| rename ID_a AS ID
| fields ID, name, title, description, solution, date, env
| stats count values(*) AS * values(date) values(title) values(env) BY name
| where count=1
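A hedged sketch of one way to produce the cartesian-style result (every SourceA name paired with every SourceB row sharing its ID), assuming a hypothetical lookup name my_lookup; join with max=0 lets each SourceA row match multiple SourceB rows:

index="myindex" sourcetype="SourceA" env=envA earliest=-7d
| dedup name
| rename ID_a AS ID
| fields ID name trimmed_name date env
| join type=inner max=0 ID
    [ search index="myindex" sourcetype="SourceB" type=INFO
      | dedup ID title_id
      | fields ID title_id title description solution ]
| lookup my_lookup trimmed_name OUTPUT department
| table name title_id title description solution date department

join has subsearch result limits, so for a large SourceB set a stats-based merge may be needed; this sketch just shows the shape.
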
I have a search with multiple evals that check whether items are true or false. With my results, I want to show something like:

Search     | Triggered | Scheduled | Test
TestAlert1 | True      | True      | True

Currently what I am getting is something like this:

Search     | Triggered | Scheduled | Test
TestAlert1 | True      | False     | False
TestAlert1 | False     | True      | False
TestAlert1 | False     | False     | True

I am thinking I need to use xyseries, but I am not sure.
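A minimal sketch of collapsing those rows with stats rather than xyseries; using max() here assumes the values are literally the strings "True" and "False", where "True" sorts higher:

<your existing search and evals>
| stats max(Triggered) as Triggered max(Scheduled) as Scheduled max(Test) as Test by Search
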
I am working on a query to report on hosts that have triggered two different event types. For example, for Windows event IDs 4697 and 4698, if both are triggered by the same host, the rule must alert.

EventType=4697
EventType=4698
HostName=<the same host for both event types>

What is the best way to express the host name being common to both event types? To clarify further: if the same host triggers 4697 and 4698 within a 5-minute window, I want to report on that. Thanks in advance.
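A minimal sketch, treating the index and field names (wineventlog, EventCode, host) as assumptions about your environment; it buckets events into 5-minute windows and keeps hosts that logged both codes in the same window:

index=wineventlog (EventCode=4697 OR EventCode=4698)
| bin _time span=5m
| stats dc(EventCode) as code_count values(EventCode) as codes by host _time
| where code_count=2

Fixed buckets can miss pairs that straddle a boundary; a streamstats time_window=5m variant would give a true rolling window.
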
The search table is empty except for _time. Can you please advise how to display the key values from message in a table?

Sample message:

{
    "timestamp": "2023-03-05 19:06:43,978+0000",
    "level": "INFO",
    "location": "request:201",
    "message": "CSSRequestId=12312311-sdgdgdbbsaas;ProcessingRegion=us-east-1;RequestStatus=Completed;Platform=;RequestId=12312311-869a-3932-97d1-sdgdgdbbsaas--123123;ResponseStatusCode=200;PlatformBuckets=['e1--application','e2-application'];DestKey=Dev/20/03/05/14/01-01-0-File.xml;Source=external;SourceKey=abcded/xyz/file.xml;",
    "service": "gwy",
    "cold_start": true,
    "function_name": "GWY-IB",
    "function_memory_size": "208",
    "function_arn": "arn:aws:us-east-3:ib",
    "function_request_id": "xxxxxxxxxxxxxx",
    "xray_trace_id": "1-xxxxxxxx"
}

Searches tried:

index="text" RequestStatus RequestID
| table RequestStatus, RequestID, PlatformBuckets, ResponseStatusCode, _time

index="text" RequestStatus RequestID
| rex "RequestStatus = (?<RequestStatus>\S+)"
| rex "RequestID = ?[\S+](?<RequestID>[\S+]*)"
| table RequestStatus, RequestID, PlatformBuckets, ResponseStatusCode, _time
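A minimal sketch of one approach; note the pairs inside message use no spaces around "=" (and the field is RequestId, not RequestID), so the rex patterns above never match. Extracting the inner string with spath and then pulling each key with [^;]+ captures:

index="text" "RequestStatus"
| spath path=message output=msg
| rex field=msg "RequestStatus=(?<RequestStatus>[^;]+)"
| rex field=msg "RequestId=(?<RequestId>[^;]+)"
| rex field=msg "ResponseStatusCode=(?<ResponseStatusCode>[^;]+)"
| rex field=msg "PlatformBuckets=(?<PlatformBuckets>\[[^\]]*\])"
| table _time RequestStatus RequestId ResponseStatusCode PlatformBuckets
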
Hello, I am performing the following search to extract the time taken to upload:

index=* my_search
| rex "\[upload\] executed in (?<ut>\d+\w+)"

The above extracts values like 343ms, 8s30ms, and 11s404ms. How would I extract the seconds portion, convert it into ms, and add it to the ms portion, so that I always get the upload time in ms, please?
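A minimal sketch of one way to normalize those values: split an optional seconds part from the milliseconds part, then combine:

index=* my_search
| rex "\[upload\] executed in (?<ut>\d+\w+)"
| rex field=ut "^(?:(?<sec>\d+)s)?(?<ms>\d+)ms$"
| eval upload_ms = coalesce(tonumber(sec), 0) * 1000 + tonumber(ms)

For example, 343ms becomes 343, 8s30ms becomes 8030, and 11s404ms becomes 11404.
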
Hi, I'm new to Splunk and still exploring. I have created a timechart with a span of 10 minutes. The timechart has a shared time picker and updates based on the time selected in the picker. I have added a drilldown option on the timechart: on click, it links to a search, and the results display in a new tab. I am passing the time range as tokens. What I am trying to achieve is that when a user clicks on any data point in the timechart, it should display all events that happened in the 5 minutes before the clicked timestamp. Somehow I am not sure how to set the earliest and latest time dynamically in the search link.
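A minimal Simple XML sketch, with the index name as a placeholder; on a time axis, $click.value$ is the clicked _time in epoch seconds, so eval tokens can derive a 5-minute window ending at the click:

<drilldown>
  <eval token="drill_earliest">$click.value$ - 300</eval>
  <eval token="drill_latest">$click.value$</eval>
  <link target="_blank">search?q=search%20index%3Dmyindex&amp;earliest=$drill_earliest$&amp;latest=$drill_latest$</link>
</drilldown>
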
I feel like this should be a simple solution, but I can't find it. My search gives the values that were present in a group both yesterday and today, but I also want to extract those that are not present on both days. My search is currently producing this:

Group | Values_today    | Values_yesterday        | Count_today | Count_yesterday | change
a     | 111 333 444 555 | 111 222 333 444 555     | 4           | 5               | -1
b     | 111 222 333     | 111 222 333             | 3           | 3               | 0
c     | 111 222 333 666 | 111 222 333 444 555 666 | 4           | 6               | -2
d     | 111 222 333     | 111 222                 | 3           | 2               | +1

Here is the desired output:

Group | Values_today    | Values_yesterday        | Count_today | Count_yesterday | change | Missing_from_today | Missing_from_yesterday
a     | 111 333 444 555 | 111 222 333 444 555     | 4           | 5               | -1     | 222                |
b     | 111 222 333     | 111 222 333             | 3           | 3               | 0      |                    |
c     | 111 222 333 666 | 111 222 333 444 555 666 | 4           | 6               | -2     | 444 555            |
d     | 111 222 333     | 111 222                 | 3           | 2               | +1     |                    | 333
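A minimal sketch using multivalue functions (this assumes Values_today and Values_yesterday are multivalue fields, and mvmap requires Splunk 8.0+); mvmap walks one list while mvfind checks membership in the other:

| eval Missing_from_today = mvmap(Values_yesterday, if(isnull(mvfind(Values_today, "^".Values_yesterday."$")), Values_yesterday, null()))
| eval Missing_from_yesterday = mvmap(Values_today, if(isnull(mvfind(Values_yesterday, "^".Values_today."$")), Values_today, null()))
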
I would like to dynamically create AWS metadata inputs using Splunk's REST API. The documentation I have found looks incomplete and incorrect: https://docs.splunk.com/Documentation/AddOns/released/AWS/APIreference Has anyone done this?
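One way to discover what the docs may be missing, sketched rather than confirmed since add-on REST paths vary by version: list the modular input types and the add-on's conf files through core Splunk endpoints. The conf name below (aws_metadata_tasks) is an assumption to verify against your installed Splunk_TA_aws:

# List modular input types registered on the instance
curl -k -u admin:changeme "https://localhost:8089/services/data/inputs?output_mode=json"

# Inspect a conf file in the add-on's namespace (conf name is an assumption)
curl -k -u admin:changeme "https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/configs/conf-aws_metadata_tasks?output_mode=json"
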
I have a lookup which contains varying information linked to a user field in the lookup. I need a way to allow users to see this lookup, but only the rows that are linked to their own user account. I can do this in search by using a rest command to pull the user's username and match it against the user field, but there is nothing stopping them from just removing the rest line and seeing everything in that lookup. I've also tried moving the data into an index and building the rest call into the role's search filter, but Splunk doesn't allow subsearches in srchFilter, so this doesn't work either. So I need a way to apply the filter automatically whenever anyone tries to access the lookup and/or index. I believe there may be a way to use Python to add a role that allows them to run the filtered search on a dashboard, and then remove the role once the search has completed, but my knowledge of Python is nil. Has anyone got any experience with building or working with something similar?

| inputlookup lookup.csv
| search
    [| rest /services/authentication/current-context/context
     | table username
     | rename username as user]
Hello Members, this is a great source of information and help. I am using the Splunk Add-on for Squid Proxy and am getting data from the Squid access log in the recommended Splunk format. The dashboard that comes with the install is fine, and I am creating another dashboard based on it. I am monitoring bytes_in, bytes_out, and bytes; it seems that in the Splunk search, bytes is the total of bytes_in and bytes_out. I am using a search that returns the sum of bytes_out across all src_ips, using | stats sum(bytes_out), like this:

index=squid
| stats sum(bytes_out) as TotalBytes
| eval gigabytes=TotalBytes/1024/1024/1024
| table gigabytes

I do the same thing for bytes. Is there some way I can create a visualization, using a single value viz, to show bytes per unit of time, like x bytes/hour, maybe even one of those gauges? I would like to use a time picker with this as well, or a selectable span, etc. Would timechart allow me to do this? Thanks so much, eholz1
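A minimal sketch: timechart produces the per-hour series, and the single value visualization renders the latest value with a sparkline when given a _time-based result; the span could also come from a dashboard token to make it selectable:

index=squid
| timechart span=1h sum(bytes) as TotalBytes
| eval GB_per_hour = round(TotalBytes / 1024 / 1024 / 1024, 3)
| fields _time GB_per_hour
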
Hello, I am stuck on a query and need someone's help, please. The goal of the query is to use a lookup whose columns A and B are a list of hostnames and FQDNs as the target scope for the extended search. I need to find out what new local accounts have been created AND who created them. The OS scope is Windows for now; however, I will need to do this search on *NIX servers as well. The query itself works, but I don't know for sure whether the input scope is being targeted, or what the best-practice method is. There are 2 columns I am focused on in the CSV: "name" and "fqdn". I have done extensive research on this: one article says to put the subsearch in [] brackets after the main query, while another states to put the inputlookup query first and the remaining conditions after it. Let me know what is right or wrong, and the reasons to do it either way. Here is my query:

sourcetype=wineventlog source="WinEventLog:Security" (EventCode=4720 OR EventCode=624)
| eval CreatedBy = mvindex(Account_Name,0)
| eval New_User = mvindex(Account_Name,1)
| search CreatedBy=*
| table _time ComputerName EventCode CreatedBy New_User name ip_address
| sort by ComputerName, _time
    [|inputlookup Servers.csv |fields fqdn, name |lookup Servers fqdn AS ComputerName, name AS ComputerName ]
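A minimal sketch of the usual pattern: the inputlookup subsearch goes in the base search, where it expands into an OR of ComputerName=value filters before any events are processed; mvappend covers events that log either the short name or the FQDN:

sourcetype=wineventlog source="WinEventLog:Security" (EventCode=4720 OR EventCode=624)
    [ | inputlookup Servers.csv
      | eval ComputerName=mvappend(name, fqdn)
      | mvexpand ComputerName
      | dedup ComputerName
      | fields ComputerName ]
| eval CreatedBy = mvindex(Account_Name,0), New_User = mvindex(Account_Name,1)
| search CreatedBy=*
| table _time ComputerName EventCode CreatedBy New_User
| sort ComputerName, _time
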