All Topics

I have a query to find the maximum event count that has occurred in any one minute over time, per path:

index="xxx" "headers.a"="abc"
| rename "status.operation_path" as PATH
| bucket _time span=1m
| stats count by PATH _time
| stats max(count) by PATH

The query above displays the maximum per-minute event count vs. PATH. I also need to display the time when this maximum event count happened for each path.
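One common pattern for keeping the timestamp alongside the maximum (a sketch based on the query in the post, not tested against the data) is to rank the per-minute counts with eventstats instead of collapsing them with a second stats:

```
index="xxx" "headers.a"="abc"
| rename "status.operation_path" as PATH
| bucket _time span=1m
| stats count by PATH _time
| eventstats max(count) as max_count by PATH
| where count = max_count
| table PATH _time count
```

If several minutes tie for the maximum, this keeps one row per tied minute; a final | dedup PATH would keep just the first.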
Hi, we want an indexed field called 'actual_server' to indicate the hostname of the forwarder that passed us the data. My initial thought is that there might be two options to achieve this:
1 - hostname available in the logs, which I think is not correct
2 - write the system hostname in transforms.conf

I will create an app on the CM and roll out this props.conf and transforms.conf against sourcetype=testlog:

props.conf:
[testlog]
TRANSFORMS-netscreen = example

transforms.conf:
[example1]
WRITE_META = true
FORMAT = actual_server::FORWARDER1

and on the search head, add the following lines to fields.conf:

[actual_server]
INDEXED = true

Is this correct?
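For comparison, a self-consistent sketch of the three files (stanza names here are illustrative): the transform name referenced by TRANSFORMS- must match the transforms.conf stanza name, and an index-time transform needs a REGEX, even a catch-all one:

```
# props.conf (on the first full instance that parses the data)
[testlog]
TRANSFORMS-actual_server = add_actual_server

# transforms.conf
[add_actual_server]
REGEX = .
WRITE_META = true
FORMAT = actual_server::FORWARDER1

# fields.conf (on the search head)
[actual_server]
INDEXED = true
```

Note that hardcoding FORWARDER1 means each forwarder (or each app instance) needs its own value, which is worth weighing before rolling this out broadly.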
Hello again, I'm back to ask for your help. I feel that DB Connect is a headache. I am very confused about its configuration, specifically the part where the errors appear in red at the top, where I understand a step must be done at the driver level. I just need these errors at the top to disappear so I can configure the identities, connections, and inputs, where I have no doubts; my problem is basically these errors at the top. The documentation is only clear for those who have already done the process, but for those just starting with these first configurations it is very confusing: the texts are full of hyperlinks to other documents, and you end up with 5 or 10 other related documents. To start removing these errors, what should I do next? Note: as far as possible, please do not share documentation links with me; I have read them and they have not helped at all. If someone could explain it to me as simply as possible, I would appreciate it.
Hi Everyone. I have a small dashboard whose purpose is rather simple: input information, and when the submit button is clicked, the info will be appended to a lookup. But first, I want to simply display what the user inputs in a table. However, the submit button is not working. Here is the dashboard code:

<form version="1.1" theme="dark">
  <label>Anonomized Dash Name</label>
  <description>Anonomized Description.</description>
  <fieldset submitButton="true" autoRun="false">
    <input type="text" token="username_token" searchWhenChanged="false">
      <label>Username</label>
    </input>
    <input type="text" token="email_token" searchWhenChanged="false">
      <label>Email</label>
    </input>
    <input type="text" token="phone_number_token" searchWhenChanged="false">
      <label>Phone Number</label>
    </input>
    <input type="text" token="intel_date_token" searchWhenChanged="false">
      <label>Intel Date</label>
    </input>
    <input type="text" token="ip_address_token" searchWhenChanged="false">
      <label>IP Address</label>
    </input>
    <input type="text" token="EE_number_token" searchWhenChanged="false">
      <label>EE_number</label>
    </input>
    <input type="text" token="mailiing_address_token" searchWhenChanged="false">
      <label>Mailing Address</label>
    </input>
    <input type="text" token="ticket_number" searchWhenChanged="false">
      <label>Ticket Number</label>
    </input>
    <input type="text" token="intel_type_token" searchWhenChanged="false">
      <label>Intel Type</label>
      <default>ATO</default>
    </input>
    <input type="text" token="comments_token" searchWhenChanged="false">
      <label>Comments</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Results</title>
        <search>
          <query>| makeresults
| eval username = "$username_token$"
| eval callback_num = "$phone_number_token$"
| eval email = "$email_token$"
| eval intel_date = "$intel_date_token$"
| eval src_ip = "$ip_address_token$"
| eval "mailing address" = "$mailing_address_token$"
| eval EE_number = "$EE_number_token$"
| eval ph_num = "$phone_number_token$"
| eval "Ticket Number" = "$ticket_number_token$"
| eval Comments = "$comments_token$"
| table username email EE_number callback_num ph_num intel_date src_ip "mailing address" "Ticket Number" Comments</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
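For what it's worth, a frequent cause of a dead-looking submit is a mismatch between input token names and the $token$ references in the query; in this dashboard, token="ticket_number" is referenced as $ticket_number_token$, and token="mailiing_address_token" (note the spelling) is referenced as $mailing_address_token$. A sketch of matching definitions, assuming that is the intent:

```
<!-- Sketch: input token names aligned with the $...$ references in the query -->
<input type="text" token="ticket_number_token" searchWhenChanged="false">
  <label>Ticket Number</label>
</input>
<input type="text" token="mailing_address_token" searchWhenChanged="false">
  <label>Mailing Address</label>
</input>
```

With unset tokens, the search never dispatches, which looks exactly like a broken submit button.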
We have a large (~500 line) report being used to calculate CVE scores and fill a summary index daily, with vulnerabilities from Qualys as the initial input that gets enriched. One of the fields in the output is called ICT_ID, and this is supposed to match the vulnerabilities from Qualys against a lookup CSV file. If the vulnerability matches, it gets a corresponding ICT_ID; otherwise this field is NULL. The issue is that our lookup file has no unique primary key. QID (Qualys vuln ID) is the closest thing to a PK in the lookup, but there are multiple rows with the same QID whose other fields, like IP and host, differ. The requirement for matching a vulnerability to the ICT list is two-fold: 1) the QID must match, and 2) *any* of the following must also match (host, IP, app), *in that order of precedence*.

The following code was implemented, but it seems to only match on the first matching instance of QID in the lookup, which usually breaks the rest of the logic for the other fields which should match.

```precedence: (host -> ip -> app -> assetType) && QID```
| join type=left QID
    [ | inputlookup ICT_LOOKUP.csv
      | rename "Exp Date" as Exp_Date
      | rename * as ICT_LOOKUP-*
      | rename ICT_LOOKUP-QID as QID ]
```rename fields from lookup as VM_ICT_Tracker-<field>```
| eval ICTNumber = case(
    like(entity, 'ICT_LOOKUP-host'), 'ICT_LOOKUP-ICTNumber',
    like(IP, 'ICT_LOOKUP-ip'), 'ICT_LOOKUP-ICTNumber',
    like(app, 'ICT_LOOKUP-app'), 'ICT_LOOKUP-ICTNumber',
    like(assetType, 'ICT_LOOKUP-assetType'), 'ICT_LOOKUP-ICTNumber',
    1=1, "NULL")
| rename ICTNumber as ICT_ID
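A sketch of one way around this: join keeps only the first matching row by default (max=1), so max=0 can be used to keep every lookup row per QID, with the precedence rule applied afterwards. Field names below follow the post and are untested assumptions:

```
| join type=left max=0 QID
    [ | inputlookup ICT_LOOKUP.csv
      | rename host as lkp_host, ip as lkp_ip, app as lkp_app, ICTNumber as lkp_ICTNumber ]
| eval prec = case(entity = lkp_host, 1, IP = lkp_ip, 2, app = lkp_app, 3, 1=1, 99)
| eventstats min(prec) as best_prec by QID entity
| where prec = best_prec
| eval ICT_ID = if(prec < 99, lkp_ICTNumber, "NULL")
```

The eventstats/where pair keeps, per event, only the highest-precedence matching lookup row, instead of whichever row happened to come first.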
I have this query that ends like this:

| convert timeformat="%Y-%m-%d" ctime(_time) AS date
| stats count by loggingObject.methodName, loggingObject.httpReturnCode, date

I then click on "Visualization". How do I get the x-axis to be dates instead of the method name?
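With stats count by A, B, date, the first by-field (the method name) becomes the x-axis. Two sketches, untested against the data: either let timechart drive the x-axis from _time, or use chart ... over date to make the x-axis field explicit:

```
| timechart span=1d count by loggingObject.methodName

| convert timeformat="%Y-%m-%d" ctime(_time) AS date
| chart count over date by loggingObject.methodName
```

chart accepts only one by-field after over, so the two loggingObject fields may need to be combined into one series field first.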
Hello, I would like to find an add-on for my Webex devices. I know there's the Cisco WebEx Meetings Add-on for Splunk, but I wanted to see what everyone else is using. The Cisco WebEx Meetings Add-on inputs.conf is:

[general_service]
start_by_shell = false
python.version = python3
sourcetype = cisco:webex:meetings:general:summarysession
interval = 60
live = True
disabled = 0

[history_service]
start_by_shell = false
python.version = python3
sourcetype = cisco:webex:meetings:history:meetingusagehistory
interval = 86400
endpoints = []
disabled = 0

What does this configuration capture? Thanks
Hi, I work for a company with a big AppDynamics footprint. I frequently find myself disabling Analytics for auto-created BTs created by things like a Spring Bean catch-all rule that starts sending data for tons of tiers. This data is not being used for queries, and it will often consume our data budget until the junk going into Analytics is disabled, but I'm currently disabling these by manually unchecking the box for each one in the Analytics config. I don't understand how these BTs are getting designated for Analytics. It's not coming from data collectors, and "Enable Analytics for New Applications" is not enabled. How can I stop new BTs from automatically sending data to Analytics? Thanks, Greg
I have two fields: one is _time and the other is received_time. I want to get the time difference between these two timestamps. Logs look like:

2023-07-11 11:19:24.964 ..... received_time= 1688574223791

I converted the epoch to human-readable, but I couldn't get the time difference between the two timestamps. My search:

<query>
| rex "received_time\"\:(?<recTime>[^\,]+)"
| eval recTime = strftime(recTime/1000, "%Y-%m-%d %H:%M:%S.%3N")
| eval diff = recTime - _time
| table recTime _time diff

but it doesn't show any data in diff. Am I missing something?
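A sketch of the usual fix: strftime turns recTime into a string, and subtracting a string from the numeric _time yields null, so do the subtraction while both values are still epoch numbers and only format for display afterwards. The rex below matches the sample log line in the post; adjust it to the actual raw format:

```
| rex "received_time=\s*(?<recTime>\d+)"
| eval diff = _time - (recTime / 1000)
| eval recTime_readable = strftime(recTime / 1000, "%Y-%m-%d %H:%M:%S.%3N")
| table recTime_readable _time diff
```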
I am a beginner and I want to create something like this. My Splunk search 1 is:

index=XXX source="/opt/middleware/ibm/" findsachinattendance | timechart count span=60m | stats max(*) AS *

My Splunk search 2 is:

index=XXX source="/opt/middleware/ibm/" findtendulkarattendance | timechart count span=60m | stats max(*) AS *

I found something, but I couldn't get it to work: https://community.splunk.com/t5/Splunk-Search/How-to-create-a-Table-where-each-row-is-the-result-of-a-query/m-p/545512
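A sketch of one way to combine the two searches into a single table, assuming the goal is one labelled row per search (the series labels are illustrative):

```
index=XXX source="/opt/middleware/ibm/" findsachinattendance
| timechart count span=60m
| stats max(*) AS *
| eval series="sachin"
| append
    [ search index=XXX source="/opt/middleware/ibm/" findtendulkarattendance
      | timechart count span=60m
      | stats max(*) AS *
      | eval series="tendulkar" ]
| table series *
```

Each subsearch collapses to one row via stats max, and append stacks the rows so each row of the final table comes from one query.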
Hi, I have a field called ObjectD which is always different for each event, but in this field there is always a character string which begins with OU= and DC=. Example:

OU=Admin, OU=toto, OU=Utilsateur, DC=abc, DC=def

I need to filter the events where OU=Admin or OU=Utilisateurs, and DC=abc. So I am doing this in my search, after the stats:

| where match(ObjectD,"OU=Admin|OU=Utilisateurs),DC=abc")

But it returns nothing. I also need to create a new field with the name of the OU, but because the first clause doesn't work, the rex command doesn't work either. Here is my rex:

| rex field=ObjectD "^[^=]+=[^=]+=(?<OU>[^,]+)"

Could you help please?
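For reference, a sketch of a match and extraction that fits the example value in the post (untested; note also that the spelling differs between OU=Utilsateur in the example and OU=Utilisateurs in the search, which alone would make the filter miss):

```
| where match(ObjectD, "OU=(Admin|Utilisateurs)\b") AND match(ObjectD, "DC=abc")
| rex field=ObjectD "OU=(?<OU>[^,]+)"
```

Splitting the filter into two match() calls avoids assuming the OU and DC parts are adjacent; the rex captures the first OU value in the string.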
I have two dropdown panels. Basically, when I select any value in Monitored statistics, the Divisor value should change, and that works; the problem is that I can also see the value from the previous selection. For example, "CPU used by" in Monitored statistics has the value 100 in Divisor, but Divisor also still shows the previous value, 1000000, from another field. How can I get only one value in Divisor?
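A sketch of the usual Simple XML pattern (token names and values here are illustrative, based on the numbers in the post): drive the divisor token from a <change> block on the first dropdown, so each new selection overwrites the old value rather than leaving a stale one behind:

```
<input type="dropdown" token="monitored_stat">
  <label>Monitored statistics</label>
  <change>
    <condition value="CPU used by">
      <set token="divisor">100</set>
    </condition>
    <condition>
      <set token="divisor">1000000</set>
    </condition>
  </change>
</input>
```

With the token set centrally like this, the Divisor dropdown only ever reflects the current selection.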
Hi Team, we are trying to add a new field, DisplayName, to the interesting fields, from the raw event below:

DisplayName: sample-Hostname

We tried the query below, but it is not working:

| rex field=_raw \"DisplayName", "Value":\s(?<DisplayName>\w+).

Also, please suggest how to create a query that shows whether a user logged in on one or more devices. Thanks in advance!
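A sketch of a rex matching the sample line shown (DisplayName: sample-Hostname), plus an illustrative per-user device count; the user and device field names in the second query are assumptions, not from the post:

```
| rex field=_raw "DisplayName:\s+(?<DisplayName>\S+)"

| stats dc(device) as device_count by user
| where device_count > 1
```

If the raw events are actually JSON with a "Value" key (as the attempted rex suggests), the pattern would need to follow that structure instead.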
So we have roughly a dozen UF hosts across on-prem and cloud. All are uploading data directly to Splunk Cloud. I have had reports from other teams about sizable gaps in reporting when they perform searches. For example, a query like index=it_site1_network over the last 2 hours currently shows two large gaps of 25 minutes each. Before you ask about the activity level on this index source: it's very high; there should be a few thousand events every minute. I've checked $SPLUNK_HOME/var/log/splunk/splunkd.log to ensure the monitored files are indeed being monitored, and overall system resource utilization is very low (CPU, mem, disk, net). My question is: is metrics.log the only place to look for issues that might affect something like this?
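metrics.log is not the only place to look. A quick sketch to distinguish ingestion delay from genuinely missing events is to compare _indextime with _time on the affected index:

```
index=it_site1_network
| eval lag_seconds = _indextime - _time
| timechart span=1m max(lag_seconds) as max_lag count
```

If count stays flat while max_lag spikes around the gap windows, the events arrived late rather than never, which points at forwarder throughput or thruput limits rather than lost data.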
Hi, we have several Universal Forwarders managed by a Deployment Server that occasionally "lose" their applications, stop sending logs to the indexers, and are no longer connected to the Deployment Server. The only way to reconnect these UFs is to manually reinstall the connection apps by logging into the host, and then manage them again from the DS. How does this happen? Is there any other way to reconnect these UFs to the DS without logging in? Thanks, Mauro
RE: Case #3270697 — After upgrade to 9.1.0.1, not able to send emails, e.g. for critical alerts! [ ref:_00D409oyL._5005a2bGRKI:ref ]

After upgrading to Splunk Enterprise v9.1.0.1 (single instance) last weekend (15 July 2023), and changing the admin password as suggested by Assist (which now throws an error!?):
1) Error message when using sendemail. SMTP setting: O365. Checked the login on O365, of course.
2) Assist stopped running???
3) Also: 3a, 3b (screenshots not included)
4) New GUI / layout?
5) Annoying and non-working "Don't show this again" message on every page, even when just stepping to another dashboard on the same server/domain??
6) Endless waiting.

What is next? Anyone else suffering from the same issues?
Hi, I have enabled an email alert and it's working fine. I want to add a URL link in the email body, but it is rendered as plain text. Is there any way I can add the link to the email body? Thanks in advance.
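A sketch of the usual approach, assuming the standard email alert action: switch the message format to HTML (in the Trigger Actions UI or in savedsearches.conf) so an anchor tag renders as a clickable link. The stanza name and URL below are illustrative:

```
# savedsearches.conf
[My Alert]
action.email = 1
action.email.content_type = html
action.email.message.alert = Alert fired. See the <a href="https://example.com/runbook">runbook</a>.
```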
Hi, I'm having an issue with timestamping on one unstructured sourcetype (others, JSON and access_log, are fine). My deployment looks like UF -> HF -> Splunk Cloud. For some reason, data from the mentioned sourcetype is delayed by 1 hour; I have to increase the search time range to >60m to see the latest data. Below is the output of a query to compare index time and _time. I tried to change the timestamp extraction in the sourcetype configuration in the cloud, but it didn't help. I came up with the idea of using an INGEST_EVAL expression in a transforms.conf stanza to update the _time field at ingest time, after it has been parsed out of the actual event (+3600s):

# transforms.conf
[time-offset]
INGEST_EVAL = _time:=_time+3600

# props.conf
[main_demo]
TRANSFORMS = time-offset

I suppose there is no transforms.conf equivalent in the Splunk GUI (props.conf can be configured in the Source Types GUI section). Do I need to contact Splunk Support to perform this kind of change on the cloud indexers? Or is there another way to align _time to reflect real time? All help would be appreciated. Regards, Szymon
Hi Team, we have installed dotNetAgentSetup64-23.6.0.10056 on my machine, and we are trying to profile a .NET Framework 3.5 application. Earlier, my application was working fine without any issue, but after installing the AppDynamics .NET agent it developed problems. In Event Viewer, I got the message below:

.NET Runtime version 2.0.50727.9171 - Fatal Execution Engine Error (00007FFB20973E86) (80131506)

and another error message:

Faulting application name: w3wp.exe, version: 6.2.20348.1, time stamp: 0x405e4c14
Faulting module name: mscorwks.dll, version: 2.0.50727.9171, time stamp: 0x64501630
Exception code: 0xc0000005
Fault offset: 0x0000000000255939
Faulting process id: 0x%9
Faulting application start time: 0x%10
Faulting application path: %11
Faulting module path: %12
Report Id: %13
Faulting package full name: %14
Faulting package-relative application ID: %15

Kindly let me know why I am getting this issue after installing the AppD .NET agent, and how to profile my .NET Framework 3.5 application with AppD. Thank you.