All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I downloaded the app to my HF and send my vCenter alarms to it via SNMP. I also configured the following SNMP input and opened the required ports. What else do I need to do so that I can see the events (on my SH, of course)? What am I missing?
Hi everyone, I have a script.py that requires one argument to run normally, e.g. script.py D:\Downloads\12-Dec-2022\1234\. I intend to create a custom search command so that a Splunk dashboard GUI lets the user enter the file path (i.e. D:\Downloads\12-Dec-2022\1234\), runs script.py D:\Downloads\12-Dec-2022\1234\ in the backend, and generates a CSV file that I will then format with Splunk search commands. My question is: how can I write the generator.py script so that it calls script.py? I have a template I found:

#!/usr/bin/env python
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
from splunklib.searchcommands import \
    dispatch, GeneratingCommand, Configuration, Option, validators

@Configuration()
class %(command.title())Command(GeneratingCommand):
    """ %(synopsis)

    ##Syntax

    %(syntax)

    ##Description

    %(description)
    """
    def generate(self):
        # Put your event code here
        # To connect with Splunk, use the instantiated service object which is created
        # using the server-uri and other meta details and can be accessed as shown below
        # Example:
        # service = self.service
        pass

dispatch(%(command.title())Command, sys.argv, sys.stdin, sys.stdout, __name__)

However, I am not sure how to write it so that the command accepts an argument (e.g. the file path entered by the user). The way I foresee it, I have three things: 1. a custom search command named mycommand; 2. my own script.py, which accepts one argument (a file path) and generates stats; 3. a Splunk search. Once I have the custom search command mycommand, I can use it in a search as | mycommand <user input>, or something like that. However, when writing the custom search command I am not sure how to make it accept an argument entered by the user in the Splunk GUI. Can anyone help, please?
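For the argument handling, one possible shape is sketched below, under these assumptions: the command is exposed as | mycommand path="...", the file lives in the app's bin directory next to script.py, and commands.conf declares it as a chunked (Version 2) custom command. The class name, the "path" option, and the idea of shelling out with subprocess are illustrative, not taken from the original post.

#!/usr/bin/env python
# Hedged sketch: a generating command that accepts a "path" option and runs script.py with it.
import os
import subprocess
import sys
import time

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration, Option


@Configuration()
class MyCommand(GeneratingCommand):
    """Usage: | mycommand path=<directory containing the input files>"""

    # Declares the argument typed in the search bar; require=True makes it mandatory.
    path = Option(require=True)

    def generate(self):
        # Run the existing script.py (assumed to sit in the same bin directory) with the user-supplied path.
        script = os.path.join(os.path.dirname(__file__), "script.py")
        result = subprocess.run([sys.executable, script, self.path],
                                capture_output=True, text=True)
        # Emit a single status event; the CSV written by script.py can then be read
        # in a later search with | inputcsv or a lookup.
        yield {"_time": time.time(),
               "return_code": result.returncode,
               "stdout": result.stdout,
               "stderr": result.stderr}


dispatch(MyCommand, sys.argv, sys.stdin, sys.stdout, __name__)

In the dashboard, a text input can populate a token that is passed straight into the search, e.g. | mycommand path="$user_path$".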
Hi, I have the below event:

{"log":"2023-02-16t14:14:25.827471424z stderr F I0216 14:14:25.827359               1 connection.go:153] connecting to UNIX:///csi/csi.sock"

I need to remove 2023-02-16t14:14:25.827471424z stderr F I0216 14:14:25.827359 and have tried rex, but have not been able to do it. I was wondering if someone could help me? Thanks, Joe
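A minimal sketch of one way to strip that prefix with rex in sed mode, assuming the text sits in an extracted field called log (adjust the field name, or use _raw, to match your data):

... | rex field=log mode=sed "s/^\S+\s+stderr\s+F\s+\S+\s+\S+\s+//"

This removes the leading timestamp, the "stderr F" marker, the second timestamp pair, and the padding spaces, leaving "1 connection.go:153] connecting to UNIX:///csi/csi.sock".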
Hi folks, I'm evaluating a situation related to enabling SAML auth on SOAR, where I was previously using local accounts. Because of that, objects like assets, playbooks, etc. are currently tied to the local user IDs, while the SAML users have different user IDs. I'm looking for ideas on how to transfer that ownership from the local user ID to the new SAML user ID, so that users still own those objects after changing their login type. Another option, although probably not doable, would be to have the SAML login use the same user ID as the local one (one replacing the other); that would also be interesting to explore.
Hi Team, I have events being pushed to an HTTP Event Collector 24/7. In my dashboard I query and format the events with the transaction command, grouping on a field called traceparent. It works fine, but the report only shows 4,999 transactions. Is that a limit set on the Splunk server? Where are these limits set, and are there guidelines for increasing them without negatively impacting server performance? I have also observed that if I reach 4,999 transactions by 10 AM, new transactions arriving after 10 AM are not displayed by the query; I have to change the time picker to 'last 60 min', 'last 15 min', etc. to see the latest ones. Even if my query hits the 4,999 cap, how can I make sure those 4,999 transactions are the latest as of when the query runs, rather than the oldest? For example, if I run the query at 2 PM, I want the 4,999 transactions counting back from 2 PM. How can I achieve that? Thank you.
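If that 4,999 cap comes from the transaction command's own limits (an assumption on my part; the default is in the region of 5,000 open transactions), they live in limits.conf on the search head. A sketch with illustrative values; raise them cautiously, since transaction holds events in memory:

# limits.conf (search head) - illustrative values only
[transactions]
maxopentxn = 10000
maxopenevents = 200000

As for getting the latest transactions, the simplest lever is the time range itself: restrict the search window so it only covers the period you care about (e.g. the last few hours), so that whatever fits under the cap is guaranteed to be recent.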
source=PR1 sourcetype="sap:abap" EVENT_TYPE=STAD EVENT_SUBTYPE=MAIN TCODE="ZORF_BOX_CLOSING" SYUCOMM="SICH_T" ACCOUNT=$s_user$
| eval RESPTI = round(RESPTI/1000,2), DBCALLTI=round(DBCALLTI/1000,2)
| timechart avg(RESPTI) as "Average_Execution_Time", avg(DBCALLTI) as "Average_DB_Time", max(RESPTI) AS "Max_Execution_Time", max(DBCALLTI) as "Max_DB_Time"
| eval Average_Execution_Time = round(Average_Execution_Time,2), Average_DB_Time=round(Average_DB_Time,2)
| eval Max_Execution_Time = round(Max_Execution_Time,2), Max_DB_Time = round(Max_DB_Time,2)
| search Average_Execution_Time !=""
| search Max_Execution_Time !=""

This is the search I am working with, and in this form it works fine. However, I need to add a span to it. I have a dropdown menu that sets the token $span$, and using that also works fine. What I actually need is a span equal to the time range picker. The token from the time range picker is $tok_range$, but if I try to use it as the span, the search just says it is waiting for input. Is there a way to do this?
Hello Splunkers, I am trying to create an alert for when a log with the "UP" state is not received within 15 minutes of the "DOWN" state log being received. Can anyone help me out? Scenario: when a device goes down, Splunk receives a log from SolarWinds stating that the device is "DOWN", along with the host name. If Splunk then does not receive a log containing the "UP" state from SolarWinds within the next 15 minutes, an alert must be raised. Can anyone help me create a query for an alert covering this scenario? Thanks in advance.
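A rough sketch of one way to express this as a scheduled alert, assuming the SolarWinds events carry a state field with values DOWN/UP and a host field (the index, field names, and values are assumptions; adjust them to your data). Schedule it every few minutes over a window longer than 15 minutes and alert when results are returned:

index=solarwinds (state="DOWN" OR state="UP")
| stats latest(eval(if(state=="DOWN", _time, null()))) as last_down
        latest(eval(if(state=="UP", _time, null()))) as last_up
        by host
| where isnotnull(last_down) AND (isnull(last_up) OR last_up < last_down)
| where last_down <= relative_time(now(), "-15m")

This flags any host whose most recent DOWN event is at least 15 minutes old and has not been followed by an UP event.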
Hi, I'm quite new to Splunk and need your help. I'm trying to combine SPL with SQL. The field 25 is an event ID, the same as ele.batch_event_id on the SQL side. I suspect ele.batch_event_id = $25$ is wrong. Any ideas, please?

The error is:

Unable to run query '| dbxquery query= "SELECT MIN (ele.process_time) as MIN_PROCESS_time ,MAX (ele.process_time) as MAX_PROCESS_time FROM estar.estar_loopback_events ele, estar.engine_configuration ec WHERE ele.engine_instance = ec.engine_instance AND ele.batch_event_id = $25$ AND process_time BETWEEN TO_DATE('20230215:00:00','YYYYMMDD hh24:mi:ss') and TO_DATE('20230216 12:59:59','YYYYMMDD hh24:mi:ss') " connection='stardb' '.

The search:

index=star_linux sourcetype=engine_processed_events 2961= BBHCC-S2PBATCHPOS-BO OR BBHCC-S2PBATCHPOS-B2 OR BBHCC-S2PBATCHPOS-PO OR BBHCC-SOD-IF-Weekday-1 AND 55:GEN_STAR_PACE
| table 4896,25,55,2961
| map search="| dbxquery query= \"SELECT MIN (ele.process_time) as MIN_PROCESS_time ,MAX (ele.process_time) as MAX_PROCESS_time FROM estar.estar_loopback_events ele, estar.engine_configuration ec WHERE ele.engine_instance = ec.engine_instance AND ele.batch_event_id = $25$ AND process_time BETWEEN TO_DATE('20230215:00:00','YYYYMMDD hh24:mi:ss') and TO_DATE('20230216 12:59:59','YYYYMMDD hh24:mi:ss') \" connection='stardb' "
| table 4896, 25, MIN_PROCESS_time, MAX_PROCESS_time
Hi Splunk Gurus, I am new to lookups and this community has been a great help. I have a few cases where I can't seem to remove rows from a lookup correctly and I can't find a solution. I have a lookup table that is used to list maintenance windows on servers. My CSV lookup has four columns: CI, chgreq, mStart, and mStop. Example:

serverA     CHG0001     2023-02-16 00:00     2023-02-17 13:00

I am pulling in emails from an O365 mailbox that allow adding and clearing these maintenance windows. Adding new rows to my lookup works fine, but when I try to remove rows I get a blank lookup. Here is the search I am using:

index="maintenance_emails" Clear Maintenance
| rex field="subject" "Clear Maintenance for (?<server_name>.+)"
| inputlookup append=t maintenance_windows.csv
| where CI!=server_name
| eval CI=server_name, chgreq=chgreq, mStart=mStart, mStop=mStop
| outputlookup maintenance_windows.csv

The server_name field has the correct server name in it, and it matches a CI entry in my lookup. When I run the search I get a blank lookup table. From my testing it looks like the where statement is not working. I appear to have the same issue when trying to remove old maintenance-window entries from the same table by comparing the mStop column to the current date and time, though that may be a separate issue (e.g. with the date/time format or comparison):

| eval cur_time=strftime(now(), "%Y-%m-%d %H:%M")
| inputlookup append=t maintenance_windows.csv
| where mStop<=cur_time
| eval CI=server_name, chgreq=chgreq, mStart=mStart, mStop=mStop
| outputlookup maintenance_windows.csv

Any help would be very much appreciated.
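For what it's worth, one likely reason for the blank lookup: after | inputlookup append=t, the appended lookup rows have CI but no server_name (and the email events have server_name but no CI), so CI!=server_name never evaluates to true and everything gets filtered out before outputlookup. A hedged sketch of an alternative shape, collecting the server names to drop in a subsearch (lookup and field names taken from the post above):

| inputlookup maintenance_windows.csv
| search NOT
    [ search index="maintenance_emails" "Clear Maintenance"
      | rex field=subject "Clear Maintenance for (?<CI>.+)"
      | dedup CI
      | fields CI ]
| outputlookup maintenance_windows.csv

The expired-window cleanup can follow the same pattern: start from | inputlookup, compute cur_time with eval, keep only rows where mStop > cur_time, and write the result back (the string comparison works because the %Y-%m-%d %H:%M format sorts chronologically).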
Hello Splunk Community, I have a table with results like the one below:

Name
Tom01
Tom02
Tom03
Tom04
Quin01
Yonah01
Yonah02

I want a query that, when the text before the numeric suffix matches, keeps only the 01 entry and ignores the others. For example, if Yonah01 and Yonah02 exist, they are a pair, so it should exclude Yonah02 and keep just Yonah01; likewise, if Tom01, Tom02, Tom03, and Tom04 exist, it should exclude everything except Tom01. Thank you.
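A minimal sketch of one way to do this, assuming the values live in a field called Name and the numeric suffix always comes last: split each value into a text prefix and a number, then keep the lowest-numbered row per prefix.

... | rex field=Name "^(?<base>.+?)(?<num>\d+)$"
    | sort 0 base num
    | dedup base
    | fields Name

After the sort, dedup keeps the first row for each base value, so Tom01, Quin01, and Yonah01 survive and the higher-numbered entries are dropped.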
Hello, I am trying to import a JSON file into Splunk. The file seems to be imported into one event, but not all of it; it looks like only about 10% (or less) of the file makes it in. Could it be because of a configuration setting that I have to change? The file has this format:

{"resultsPerPage":344,"startIndex":0,"totalResults":344,"format":"NVD_CVE","version":"2.0","timestamp":"2023-02-15T09:42:40.560","vulnerabilities":[{"cve":{"id":"CVE-2013-10012","sourceIdentifier":"cna@vuldb.com","published":"2023-01-16T11:15:10.037","lastModified":"2023-01-24T15:14:10.117","vulnStatus":"Analyzed","descriptions":[{"lang":"en","value":"A vulnerability, which was classified as critical, was found in antonbolling clan7ups. Affected is an unknown function of the component Login\/Session. The manipulation leads to sql injection. The name of the patch is 25afad571c488291033958d845830ba0a1710764. It is recommended to apply a patch to fix this issue. The identifier of this vulnerability is VDB-218388."}],"metrics":{"cvssMetricV31":[{"source":"nvd@nist.gov","type":"Primary","cvssData":{"version":"3.1","vectorString":"CVSS:3.1\/AV:N\/AC:L\/PR:N\/UI:N\/S:U\/C:H\/I:H\/A:H","attackVector":"NETWORK","attackComplexity":"LOW","privilegesRequired":"NONE","userInteraction":"NONE","scope":"UNCHANGED","confidentialityImpact":"HIGH","integrityImpact":"HIGH","availabilityImpact":"HIGH","baseScore":9.8,"baseSeverity":"CRITICAL"},"exploitabilityScore":3.9,"impactScore":5.9}],"cvssMetricV30":[{"source":"cna@vuldb.com","type":"Secondary","cvssData":{"version":"3.0","vectorString":"CVSS:3.0\/AV:A\/AC:L\/PR:L\/UI:N\/S:U\/C:L\/I:L\/A:L","attackVector":"ADJACENT_NETWORK","attackComplexity":"LOW","privilegesRequired":"LOW","userInteraction":"NONE","scope":"UNCHANGED","confidentialityImpact":"LOW","integrityImpact":"LOW","availabilityImpact":"LOW","baseScore":5.5,"baseSeverity":"MEDIUM"},"exploitabilityScore":2.1,"impactScore":3.4}],"cvssMetricV2":[{"source":"cna@vuldb.com","type":"Secondary","cvssData":{"version":"2.0","vectorString":"AV:A\/AC:L\/Au:S\/C:P\/I:P\/A:P","accessVector":"ADJACENT_NETWORK","accessComplexity":"LOW","authentication":"SINGLE","confidentialityImpact":"PARTIAL","integrityImpact":"PARTIAL","availabilityImpact":"PARTIAL","baseScore":5.2},"baseSeverity":"MEDIUM","exploitabilityScore":5.1,"impactScore":6.4,"acInsufInfo":false,"obtainAllPrivilege":false,"obtainUserPrivilege":false,"obtainOtherPrivilege":false,"userInteractionRequired":false}]},"weaknesses":[{"source":"cna@vuldb.com","type":"Primary","description":[{"lang":"en","value":"CWE-89"}]}],"configurations":[{"nodes":[{"operator":"OR","negate":false,"cpeMatch":[{"vulnerable":true,"criteria":"cpe:2.3:a:clan7ups_project:clan7ups:*:*:*:*:*:*:*:*","versionEndExcluding":"2013-02-12","matchCriteriaId":"12D82AEE-3A68-4121-811C-C3462BCEAF25"}]}]}],"references":[{"url":"https:\/\/github.com\/antonbolling\/clan7ups\/commit\/25afad571c488291033958d845830ba0a1710764","source":"cna@vuldb.com","tags":["Patch","Third Party Advisory"]}

I would appreciate any help. Thank you.
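If the event is being cut short rather than mis-parsed, the usual suspect is the TRUNCATE limit (10,000 bytes per event by default), since this NVD export is one very long JSON document. A sketch of a props.conf stanza; the sourcetype name is an assumption:

# props.conf - sketch for a single large JSON document
[nvd_cve_json]
INDEXED_EXTRACTIONS = json
KV_MODE = none
TRUNCATE = 0

TRUNCATE = 0 removes the per-event byte cap. Depending on what you want to search, it may also be worth splitting the export so each entry in the vulnerabilities array becomes its own event rather than indexing the whole file as one.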
So I am trying to get a list of inactive Splunk users. I first tried just grabbing a list of all users whose last login is older than 6 months, but that gives me a list that includes users who have already been deleted from Splunk, like this:

index=_audit action="login attempt"
| where strptime('timestamp',"%m-%d-%Y %H:%M:%S")<relative_time(now(),"-6mon")
| stats latest(timestamp) by user

Then I tried joining it with a list of the current users from the REST API, like this:

| rest /services/authentication/users splunk_server=local
| fields realname, title
| rename title as user
| join user type=left
    [ search index=_audit action="login attempt"
      | where strptime('timestamp',"%m-%d-%Y %H:%M:%S")<relative_time(now(),"-6mon")
      | stats latest(timestamp) by user ]

This doesn't work and just outputs a list of the current users. What I want: a list of current Splunk users whose last login attempt is older than 6 months, with real name, username, and last login time. I have tried this solution from javiergn, but I cannot get the last login time from it: https://community.splunk.com/t5/Splunk-Search/How-do-I-edit-my-search-to-identify-inactive-users-over-the-last/m-p/285256
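A sketch of one way to get there; the trick is to compute the latest login per user without pre-filtering and only apply the six-month cut-off after the join, so current users with an old (or no) audited login are kept. Field names follow the searches above; the audit subsearch needs a time range that reaches back further than six months:

| rest /services/authentication/users splunk_server=local
| fields title realname
| rename title as user
| join type=left user
    [ search index=_audit action="login attempt"
      | stats latest(_time) as last_login by user ]
| where isnull(last_login) OR last_login < relative_time(now(), "-6mon")
| eval last_login=if(isnull(last_login), "no login within audit retention", strftime(last_login, "%Y-%m-%d %H:%M:%S"))
| table realname user last_login

Note that join subsearches are subject to result limits in larger environments, so run the search over a window wide enough to cover the audit history you care about.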
In the Admin classes, configuration precedence was defined for index time and search time. However, since the Splunk UF neither indexes nor searches, what precedence order does the Splunk UF follow?
source=PR1 sourcetype="sap:abap" EVENT_TYPE=STAD EVENT_SUBTYPE=MAIN (TCODE="ZORF_BOX_CLOSING") SYUCOMM="SICH_T" ACCOUNT=HRL*
| eval RESPTI = round(RESPTI/1000,2), DBCALLTI=round(DBCALLTI/1000,2)
| timechart avg(RESPTI) as "Average_Execution_Time" avg(DBCALLTI) as "Average_DB_Time" span=5m
| eval Average_Execution_Time = round(Average_Execution_Time,2), Average_DB_Time=round(Average_DB_Time,2)
| eventstats
| eval UCL='stdev(Average_Execution_Time)'+'mean(Average_Execution_Time)', UCL_DB='stdev(Average_DB_Time)'+'mean(Average_DB_Time)'
| eval day_of_week = strftime(_time,"%A")
| where day_of_week!= "Saturday" and day_of_week!= "Sunday"
| eval New_Field=if(RESPTI >= UCL, 1, 0)
| timechart sum(New_Field) span=$span$

This is the search I am using. I am trying to get a bar chart that shows the number of times RESPTI goes over the UCL. The problem is that I cannot compare RESPTI against the UCL, because the value does not load: if I try to table it with | table RESPTI, UCL, New_Field, then RESPTI just shows up empty.
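The likely root cause is that RESPTI only exists on the raw events; the first timechart aggregates them away, so nothing after it can see RESPTI any more, and the bare | eventstats with no function does nothing. One hedged way to restructure the search, computing the UCL over the raw per-event RESPTI values with eventstats before any timechart (note this defines the UCL from individual events rather than from 5-minute averages, so it is not exactly the original definition):

source=PR1 sourcetype="sap:abap" EVENT_TYPE=STAD EVENT_SUBTYPE=MAIN TCODE="ZORF_BOX_CLOSING" SYUCOMM="SICH_T" ACCOUNT=HRL*
| eval RESPTI=round(RESPTI/1000,2)
| eval day_of_week=strftime(_time,"%A")
| where day_of_week!="Saturday" AND day_of_week!="Sunday"
| eventstats avg(RESPTI) as mean_resp stdev(RESPTI) as stdev_resp
| eval UCL=mean_resp+stdev_resp
| eval over_ucl=if(RESPTI>=UCL, 1, 0)
| timechart sum(over_ucl) as breaches span=$span$

Because the comparison happens while RESPTI is still an event-level field, | table RESPTI, UCL, over_ucl (placed before the timechart) will show populated values.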
We have ingested Juniper logs as syslog and are trying to create some dashboards for the network team. We need dashboard templates for Juniper device log data.
How do I configure user experience monitoring for an application? Can you provide the steps? Thanks & Regards, Anshuman
Hi, I need help extracting a value from a field named "message". The "message" field values look like this:

The process C:\Windows\system32\winlogon.exe (PRD01) has initiated the power off of computer PC01 on behalf of user ADMIN JABATAN for the following reason: No title for this reason could be found
The process C:\Windows\system32\shutdown.exe (PRD01) has initiated the restart of computer PC01 on behalf of user ADMIN\SUPPORT for the following reason: No title for this reason could be found
The process C:\Windows\system32\shutdown.exe (PRD01) has initiated the restart of computer PC01 on behalf of user admin for the following reason: No title for this reason could be found

The values I want to extract are:

newField
ADMIN JABATAN
ADMIN\SUPPORT
admin

Please assist. Thanks.
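A minimal sketch, assuming every message follows the "on behalf of user ... for the following reason" pattern shown above:

... | rex field=message "on behalf of user (?<newField>.+?) for the following reason"
    | table newField

The non-greedy (.+?) stops at the first "for the following reason", so multi-word names such as ADMIN JABATAN are captured whole.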
Hello Splunkers! I'm trying to take a backup of a lookup file (file.csv) into a backup file (file_backup.csv) and schedule the search to run daily. The query below runs and overwrites the old backup file, but I want the scheduled search to run only when new entries have been added to file.csv.

|inputlookup file.csv
|outputlookup file_backup.csv

I also want to add two new columns to the backup lookup: the user who edited the lookup and the time when it was edited.

Original file (file.csv): column1, column2
The backup file (file_backup.csv) generated by the scheduled search should have: column1, column2, time, user

Any thoughts, please? Cheers!
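A sketch of the timestamp half only, assuming it is acceptable to stamp every backup row with the time of the backup run. The CSV itself does not record who edited it, so the user column would have to come from a different source (for example, outputlookup activity in the _audit index or the Lookup Editor app's own audit log), and the "only run when file.csv has new entries" condition likewise needs an external signal such as comparing row counts between file.csv and file_backup.csv before writing:

| inputlookup file.csv
| eval time=strftime(now(), "%Y-%m-%d %H:%M:%S")
| outputlookup file_backup.csv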
Kindly provide me with a solution for the following. Suppose I have created 5 health rules; I can see the violated health rules in the 'Violations & Anomalies' tab on the Controller. My question is: how do I get the exact count of how many times a particular health rule was violated over a specified (custom) time period?
Currently, I am trying to collect DNS logs with the Splunk Add-on for Windows (TA_Windows), where the inputs.conf file has [WinEventLog: //DNS Server) disabled=0, but it is still not working. I am trying to get the DNS logs into the microsoft_windows index on the indexer. The DNS Server role is installed on the machine and the UF is installed as well, but it is still not working. I have seen many other blog posts, but none points out the exact solution. Any help will be appreciated. Thanks.
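One thing worth double-checking, based purely on the snippet above: the stanza header syntax. The Windows add-on expects square brackets and :// with no space, and the target index has to exist on the indexers before events will show up. A sketch for the forwarder's local inputs.conf:

# inputs.conf on the DNS server's universal forwarder - sketch
[WinEventLog://DNS Server]
disabled = 0
index = microsoft_windows

After editing, restart the forwarder and check splunkd.log on the UF for WinEventLog errors; also confirm that the microsoft_windows index exists and that the forwarder's outputs actually reach the indexer.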