All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm trying to complete the lab for my cybersecurity course (it's from Immersive Labs). I googled a few things for this question, but the lab doesn't seem to accept my answer. Maybe I'm doing something wrong, or maybe there's a problem with my query; I'm not sure. I've used this query: index="_audit" action=* info=* | stats count by user. I need help with this to search login attempts for username=admin.
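A minimal sketch of one way to narrow that search to the admin user, assuming the audit trail's usual action and info field values (verify against your own events, since the lab may expect a specific query):

index="_audit" action="login attempt" user=admin
| stats count by user, info ``` info is typically "succeeded" or "failed" ```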
How can I tell Splunk, in the props.conf file, to take the second timestamp in an event as opposed to the first?
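For reference, a minimal props.conf sketch of the standard approach: use TIME_PREFIX to move the timestamp extractor past the first timestamp before it starts looking. The sourcetype name and the regex are placeholders you would adapt to your events:

[my_sourcetype]
# regex that consumes everything up to (and including) the first timestamp
TIME_PREFIX = ^\S+\s+\S+\s+
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25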
Hi, can anybody tell me what the goal of this regex is? | regex ImagePath="\\\\\\\\" As far as I can tell, it seems to search for a character string delimited by 4 backslashes? Thanks
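A quick way to reason about the escaping layers, sketched with assumed test data: the quoted SPL string strips one level of backslash escaping (8 backslashes become 4), and the PCRE engine strips another (\\\\ matches 2 literal backslashes), so the regex keeps events whose ImagePath contains two consecutive backslashes, such as the start of a UNC path:

| makeresults
| eval ImagePath="\\\\fileserver\\share\\tool.exe" ``` after eval unescaping: \\fileserver\share\tool.exe ```
| regex ImagePath="\\\\\\\\" ``` matches because ImagePath contains \\ ```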
Hello, does anyone know how to delete an authorization token for an account that no longer exists in Splunk? We have tried it in Splunk Web, but Splunk returns "Could not get info for non-existent user". We have tried it on the servers, too. For
curl -k -u <username>:<password> -X DELETE https://<server>:<management_port>/services/authorization/tokens/<token_user> -d id=<token_id>
we get:
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Could not find object id=xxxxx</msg>
  </messages>
</response>
Is there any directory or file where authentication tokens are saved on the Search Heads? We need to get rid of the internal errors that we receive for this non-existent user, but without removing the token that will not be possible. Many thanks in advance for your help! Greetings, Justyna
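One hedged suggestion, assuming the tokens endpoint behaves like other Splunk REST collections: list every token first to confirm the exact id and owner that the DELETE should target (placeholders to be replaced with your values):

# list all tokens on the search head, including those of deleted users
curl -k -u <admin_user>:<password> \
  "https://<server>:<management_port>/services/authorization/tokens?count=0"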
Hi, where can we get the community edition of Splunk SOAR as an OVA image for VirtualBox? Thank you, Bhushan Kale
Hello, I am attempting to make a dashboard that will simply show whether a host/server is up or down: basically a box that is green or red for each server. Most threads I have seen are fairly old, so I am hoping there is an easier way to show this in either Simple XML or Dashboard Studio. Thanks
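A minimal SPL sketch of the usual approach, assuming every server forwards events (the index filter and the 5-minute threshold are placeholders); the resulting status field can drive a color-by-value table or single-value panels:

| tstats latest(_time) as last_seen where index=* by host
| eval status=if(now() - last_seen < 300, "up", "down") ``` up if seen in the last 5 minutes ```
| table host status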
This is a complicated requirement for me, a Splunk beginner. I hope you can give me some advice. Splunk version: 9.0.2303.201. Since there are a lot of logs (events) that meet my search requirement, I want to generate a timechart from them. I want to group the events by a specific field named "field1": for events in group A, the "field1" value is unique compared with all other events; for events in group B, the "field1" value is repeated once, which means that when I search for a group-B value of "field1", it returns two events. Based on this, I want to count the events in each of the two groups over time and display them in a timechart. What can I do?
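A minimal sketch, assuming the A/B split is defined over all events in the search window (your base search replaces the first line, and the span is a placeholder):

index=your_index your_filters
| eventstats count as occurrences by field1 ``` how many events share this field1 value ```
| eval group=if(occurrences=1, "A", "B")
| timechart span=1h count by group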
Following the documentation here: https://docs.splunk.com/Documentation/Splunk/latest/RESTTUT/RESTsearches#Create_a_search_job I expect that a successful REST API call to the endpoint "/services/search/jobs" would return a single job ID, as the documentation shows. However, in my testing, when the call returns with a status of 200 (success), the response data contains an object with 6 keys: Object.keys(jobId) = (6) ['links', 'origin', 'updated', 'generator', 'entry', 'paging'] where jobId.entry is an array of hundreds of search jobs -- basically, the call to create a search job returned a list of all the jobs on the search head. The code (JavaScript) is in this public repository: https://github.com/ww9rivers/splunk-rest-search Am I missing anything? Thank you for your insights!
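A hedged observation, sketched below: creating a job requires an HTTP POST, while a GET against /services/search/jobs returns the whole jobs collection (the links/origin/entry/paging envelope described above), which matches what is being seen. Hypothetical credentials and search string:

# POST creates a job and returns just the new SID
curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
  -d search="search index=_internal | head 5" -d output_mode=json

# GET on the same endpoint lists all jobs (the 'entry' array)
curl -k -u admin:changeme "https://localhost:8089/services/search/jobs?output_mode=json"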
Hi, we have a distributed deployment that includes an SH Cluster and an IDX Cluster; HEC on the IDXs is used to receive the data. I want to use ingest-time lookups, BUT the lookup will need to be refreshed (let's say hourly). Now the question is, how will that work? SHs can refresh a lookup and it will be pushed as part of the search bundle to the IDXs, but I don't think the IDXs will know how to use it for an ingest-time lookup (that bundle is used at search time), would they? The only option I can think of is to run the scheduled search that populates the lookup on the Cluster Master but tell it to output the lookup into the `slave_apps` folder, but that would require pushing a new IDX bundle every time..... Any thoughts on how to do it? Thanks.
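For reference, a heavily hedged sketch of what the indexer side looks like, assuming Splunk 9.x's lookup() eval function inside INGEST_EVAL (check the transforms.conf spec for your version); the sourcetype, CSV name, and field names are hypothetical, and the CSV must already be on each peer, which is exactly the refresh problem being asked about:

# props.conf (on the indexers, pushed from the CM)
[my_sourcetype]
TRANSFORMS-enrich = enrich_at_ingest

# transforms.conf
[enrich_at_ingest]
INGEST_EVAL = site=json_extract(lookup("hosts_to_sites.csv", json_object("host", host), json_array("site")), "site")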
We have a requirement to pull security logs for specific past time ranges, e.g. from December 2022 to April 2023. Splunk cannot complete a search without it expiring, even for a 1-hour window in December. This defeats our published 12-month retention period for these logs. Please provide options for how to identify, correct, or improve this search challenge. The error is: The search job 'SID' was canceled remotely or expired. Sometimes the GUI shows "Unknown SID". The version currently used is 8.2.9.
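A hedged starting point for narrowing down whether the data side or the job lifetime is the problem, with a hypothetical index name: a tstats count over indexed fields is far cheaper than a raw event search and should return quickly even over old buckets, which helps separate slow bucket access from dispatch TTL expiry:

| tstats count where index=security earliest="12/01/2022:00:00:00" latest="01/01/2023:00:00:00" by _time span=1d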
I would like to forward logs from sources coming from UDP inputs on a Heavy Forwarder to two Splunk Clouds, with different index names on each one. I have a Fortinet source coming from server1 (fortinet index), an ESET source coming from server2 (eset index), and others sending logs to a Heavy Forwarder with UDP inputs. For the forwarding to the Splunk Clouds I have two splunkclouduf.spl apps installed, one for each, which configure the forwarding on the Heavy Forwarder. So I have these apps installed on the Heavy Forwarder: 100_foo1_splunkcloud and 100_foo2_splunkcloud. I would like to:
- Send all logs to foo1.splunkcloud.com with the predefined index.
- Send only the fortinet and eset sources to foo2.splunkcloud.com and change the index to foo2_fortinet and foo2_eset respectively.
For this scenario I propose this config ($SPLUNK_HOME=/opt/splunk):

/opt/splunk/etc/system/local/props.conf

[host::server1]
TRANSFORMS-routing1=app_foo1
TRANSFORMS-routing2=fortigate_foo2_index,app_foo2

[host::server2]
TRANSFORMS-routing3=app_foo1
TRANSFORMS-routing4=eset_foo2_index,app_foo2

/opt/splunk/etc/system/local/transforms.conf

[app_foo1]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=<splunkcloud_foo1>

[app_foo2]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=<splunkcloud_foo2>

[fortigate_foo2_index]
REGEX=.
DEST_KEY=_MetaData:Index
FORMAT=foo2_fortinet

[eset_foo2_index]
REGEX=.
DEST_KEY=_MetaData:Index
FORMAT=foo2_eset

/opt/splunk/etc/apps/100_foo1_splunkcloud/local/outputs.conf

[tcpout:splunkcloud_foo1]
<sslPassword for foo1>

/opt/splunk/etc/apps/100_foo2_splunkcloud/local/outputs.conf

[tcpout:splunkcloud_foo2]
<sslPassword for foo2>

/opt/splunk/etc/apps/100_foo1_splunkcloud/default/outputs.conf

[tcpout:splunkcloud_foo1]
server = <bunch of 15 balanced servers of foo1.splunkcloud>

/opt/splunk/etc/apps/100_foo2_splunkcloud/default/outputs.conf

[tcpout:splunkcloud_foo2]
server = <bunch of 15 balanced servers of foo2.splunkcloud>

Is this a valid configuration for the scenario?
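One hedged caution worth checking against the transforms.conf spec: successive transforms that write the same DEST_KEY overwrite each other, so routing2 would replace the _TCP_ROUTING value routing1 just set, and server1's events would go only to foo2. The usual pattern for fanning out to both destinations is a single transform whose FORMAT lists both tcpout group names, sketched here with the group names from the proposed config:

[route_to_both_clouds]
REGEX=.
DEST_KEY=_TCP_ROUTING
# comma-separated tcpout group names: events are cloned to both routes
FORMAT=splunkcloud_foo1,splunkcloud_foo2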
I have a query to find the maximum event count that has happened in a minute over time, as below:
index="xxx" "headers.a"="abc"
| rename "status.operation_path" as PATH
| bucket _time span=1m
| stats count by PATH _time
| stats max(count) by PATH
The above query displays the maximum event count in a minute per PATH. I also need to display the time when this maximum event count happened for each PATH.
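A minimal sketch of one way to keep the timestamp of the peak minute, reusing the same base search: sort the per-minute counts descending and keep the first row per PATH:

index="xxx" "headers.a"="abc"
| rename "status.operation_path" as PATH
| bucket _time span=1m
| stats count by PATH _time
| sort 0 - count ``` highest counts first; 0 = no row limit ```
| dedup PATH ``` keep the top minute for each PATH ```
| rename count as max_count, _time as peak_minute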
Hi, we want an indexed field called 'actual_server' to indicate the hostname of the forwarder that passed us the data. My initial thought is that there might be two options to achieve this:
1 - the hostname is available in the logs, which I think is not correct, or
2 - write the system hostname via transforms.conf.
I will create an app on the CM and roll out this props.conf and transforms.conf against sourcetype=testlog:
[testlog]
TRANSFORMS-netscreen = example
[example1]
WRITE_META=true
FORMAT = actual_server::FORWARDER1
And on the search head, add the following lines to fields.conf:
[actual_server]
INDEXED=true
Is this correct?
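Not an authoritative answer, but two details stand out against the props/transforms specs: the TRANSFORMS class references 'example' while the stanza is named '[example1]', and an index-time transform needs a REGEX to fire. A corrected sketch follows, with the caveat that FORMAT is a static string, so each forwarder would need its own copy of the value (or the field stamped at the forwarder itself, e.g. via _meta in inputs.conf):

# props.conf
[testlog]
TRANSFORMS-netscreen = example

# transforms.conf
[example]
REGEX = .
WRITE_META = true
FORMAT = actual_server::FORWARDER1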
Hello again, I am back to ask for your help. I feel that DB Connect is a headache. I am very confused about its configuration, specifically the errors that appear in red at the top, where I understand a step must be completed at the driver level. I just need these errors at the top to disappear so that I can configure the identities, connections, and inputs, where I have no doubts; my problem is basically these errors at the top. The documentation is only clear for those who have already done the process, but for those just starting with these first configurations it is very confusing: the texts are full of hyperlinks to other documents, and you end up with 5 or 10 other related documents. To start removing these errors, what should I do next? Note: as far as possible, please do not share documentation links with me; I have read them and they have not helped at all. If someone could explain it to me as simply as possible, I would appreciate it.
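A hedged checklist of the two setup steps that most commonly cause the red banner errors in DB Connect, assuming a recent app version: first point the app at a supported Java runtime under the app's Configuration > Settings page, then drop the vendor's JDBC driver jar into the app's drivers folder and restart Splunk. A sketch of the second step, assuming a default install path and a MySQL driver (swap in whichever driver matches your database):

# hypothetical example: install a MySQL JDBC driver for DB Connect
cp mysql-connector-j-8.0.33.jar $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers/
$SPLUNK_HOME/bin/splunk restart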
Hi everyone. I have a small dashboard whose purpose is rather simple: input information, and when the submit button is clicked, the info will be appended to a lookup. But first, I want to simply display what the user inputs in a table. However, the submit button is not working. Here is the dashboard code:

<form version="1.1" theme="dark">
  <label>Anonomized Dash Name</label>
  <description>Anonomized Description.</description>
  <fieldset submitButton="true" autoRun="false">
    <input type="text" token="username_token" searchWhenChanged="false">
      <label>Username</label>
    </input>
    <input type="text" token="email_token" searchWhenChanged="false">
      <label>Email</label>
    </input>
    <input type="text" token="phone_number_token" searchWhenChanged="false">
      <label>Phone Number</label>
    </input>
    <input type="text" token="intel_date_token" searchWhenChanged="false">
      <label>Intel Date</label>
    </input>
    <input type="text" token="ip_address_token" searchWhenChanged="false">
      <label>IP Address</label>
    </input>
    <input type="text" token="EE_number_token" searchWhenChanged="false">
      <label>EE_number</label>
    </input>
    <input type="text" token="mailiing_address_token" searchWhenChanged="false">
      <label>Mailing Address</label>
    </input>
    <input type="text" token="ticket_number" searchWhenChanged="false">
      <label>Ticket Number</label>
    </input>
    <input type="text" token="intel_type_token" searchWhenChanged="false">
      <label>Intel Type</label>
      <default>ATO</default>
    </input>
    <input type="text" token="comments_token" searchWhenChanged="false">
      <label>Comments</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Results</title>
        <search>
          <query>| makeresults
| eval username = "$username_token$"
| eval callback_num = "$phone_number_token$"
| eval email = "$email_token$"
| eval intel_date = "$intel_date_token$"
| eval src_ip = "$ip_address_token$"
| eval "mailing address" = "$mailing_address_token$"
| eval EE_number = "$EE_number_token$"
| eval ph_num = "$phone_number_token$"
| eval "Ticket Number" = "$ticket_number_token$"
| eval Comments = "$comments_token$"
| table username email EE_number callback_num ph_num intel_date src_ip "mailing address" "Ticket Number" Comments</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
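A hedged observation on why submit may appear dead: a panel search only dispatches once every $token$ it references is set, and two of the referenced tokens can never be set here because of name mismatches. The input defines token="mailiing_address_token" (note the spelling) while the query reads $mailing_address_token$, and the input defines token="ticket_number" while the query reads $ticket_number_token$. A minimal corrected pair, assuming the query's spellings are the intended ones:

<input type="text" token="mailing_address_token" searchWhenChanged="false">
  <label>Mailing Address</label>
</input>
<input type="text" token="ticket_number_token" searchWhenChanged="false">
  <label>Ticket Number</label>
</input>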
We have a large (~500 line) report being used to calculate CVE scores and fill a summary index daily, with vulnerabilities from Qualys as the initial input that gets enriched. One of the fields in the output is called ICT_ID, and it is supposed to match the vulnerabilities from Qualys against a lookup CSV file. If a vulnerability matches, it gets the corresponding ICT_ID; otherwise this field is NULL. The issue is that our lookup file has no unique primary key. QID (Qualys vuln ID) is the closest thing to a PK in the lookup, but there are multiple rows with the same QID whose other fields, like IP and host, differ. The requirement for matching a vulnerability to the ICT list is two-fold: 1) the QID must match, and 2) *any* of the following must also match (host, IP, app), *in that order of precedence*.
The following code was implemented, but it seems to only match on the first matching instance of QID in the lookup, which usually breaks the rest of the logic for the other fields that should match.

``` precedence: (host -> ip -> app -> assetType) && QID ```
| join type=left QID
    [ | inputlookup ICT_LOOKUP.csv
      | rename "Exp Date" as Exp_Date
      | rename * as ICT_LOOKUP-*
      | rename ICT_LOOKUP-QID as QID ]
``` rename fields from lookup as VM_ICT_Tracker-<field> ```
| eval ICTNumber = case(
    like(entity,'ICT_LOOKUP-host'), 'ICT_LOOKUP-ICTNumber',
    like(IP,'ICT_LOOKUP-ip'), 'ICT_LOOKUP-ICTNumber',
    like(app,'ICT_LOOKUP-app'), 'ICT_LOOKUP-ICTNumber',
    like(assetType,'ICT_LOOKUP-assetType'), 'ICT_LOOKUP-ICTNumber',
    1=1,"NULL")
| rename ICTNumber as ICT_ID
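A hedged alternative that avoids join entirely (join keeps only one lookup row per QID, which matches the symptom): run the lookup command once per key pair and pick the first non-null result in precedence order. The lookup-side field names (host, ip, app, ICTNumber) are assumed from the description:

| lookup ICT_LOOKUP.csv QID, host as entity OUTPUTNEW ICTNumber as ict_by_host
| lookup ICT_LOOKUP.csv QID, ip as IP OUTPUTNEW ICTNumber as ict_by_ip
| lookup ICT_LOOKUP.csv QID, app OUTPUTNEW ICTNumber as ict_by_app
| eval ICT_ID=coalesce(ict_by_host, ict_by_ip, ict_by_app) ``` precedence: host -> ip -> app ```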
I have this query that ends like this:
| convert timeformat="%Y-%m-%d" ctime(_time) AS date
| stats count by loggingObject.methodName, loggingObject.httpReturnCode, date
I then click on "Visualization". How do I get the x-axis to be dates instead of the method name?
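A minimal sketch of the usual fix: chart visualizations take the x-axis from the first group-by field, so either list date first or keep _time and use timechart, combining the other two fields into one series name (field names taken from the query above):

| eval series='loggingObject.methodName'.":".'loggingObject.httpReturnCode'
| timechart span=1d count by series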
Hello, I would like to find an add-on for my Webex devices. I know there's the Cisco WebEx Meetings Add-on for Splunk, but I wanted to see what everyone else is using. The Cisco WebEx Meetings Add-on inputs.conf is:

[general_service]
start_by_shell = false
python.version = python3
sourcetype = cisco:webex:meetings:general:summarysession
interval = 60
live = True
disabled = 0

[history_service]
start_by_shell = false
python.version = python3
sourcetype = cisco:webex:meetings:history:meetingusagehistory
interval = 86400
endpoints = []
disabled = 0

What does this configuration capture? Thanks
Hi, I work for a company with a big AppD footprint. I frequently find myself disabling Analytics for auto-created BTs for things like a Spring Bean catch-all rule that starts sending data for tons of tiers. This data is not being used for queries, and it will often consume our data budget until the junk going into Analytics is disabled, but I'm currently disabling these by manually unchecking the box for each one in the Analytics config. I don't understand how these BTs are getting designated for Analytics. It's not coming from data collectors, and "Enable Analytics for New Applications" is not enabled. How can I stop new BTs from automatically sending data to Analytics? Thanks, Greg
I have two fields: one is _time and the other is received_time. I want to get the time difference between these two timestamps. The logs look like:
2023-07-11 11:19:24.964 ..... received_time= 1688574223791
I converted the epoch to human-readable, but I couldn't get the time difference between the two timestamps. My search:
<query>
| rex "received_time\"\:(?<recTime>[^\,]+)"
| eval recTime = strftime(recTime/1000, "%Y-%m-%d %H:%M:%S.%3N")
| eval diff = recTime - _time
| table recTime _time diff
but it doesn't show any data in diff. Am I missing something?
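A hedged sketch of the likely fix: once strftime turns recTime into a string, arithmetic against the numeric _time returns null, so do the subtraction on epoch values first and format a separate display field. The rex below assumes the "received_time= 1688..." form shown in the sample log rather than the quoted-JSON form in the original rex:

| rex "received_time=\s*(?<recTime>\d+)"
| eval recTime_epoch = recTime/1000 ``` milliseconds -> seconds ```
| eval diff = _time - recTime_epoch ``` difference in seconds ```
| eval recTime_readable = strftime(recTime_epoch, "%Y-%m-%d %H:%M:%S.%3N")
| table _time recTime_readable diff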