All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, is there a way to get older forwarder versions than what is available here: Splunk Universal Forwarder Previous Releases | Splunk? I need a forwarder that works with Windows XP, but I can't find an installer for a version that old. Also, before anyone asks: yes, I know I shouldn't have Windows XP, but it is a disconnected environment and the architecture requires it. I can't change or remove it, so please don't suggest that. I need a forwarder that can work with XP (and ideally also 2000 and 98 SE, though I doubt those ever existed).
Hello all, I have a requirement to upload logs to Splunk from 5 hosts: 3 Linux and 2 Windows. The Linux logs are being picked up by Splunk, but for Windows I am unable to see logs from the 1st to the 9th of every month. The timestamps on the Windows servers run 1/09/2022, 2/09/2022, and so on up to 9/09/2022. %d in props requires the day to be two digits, but here it is a single digit, so those events are not being picked up. I therefore tried the props below for Windows with %e:

TIME_FORMAT=%e/%m/%Y %H:%M:%S

We normally apply props by sourcetype; since that was not working I also tried applying them by source, but the issue remains. By the way, in our infrastructure the props are kept on a heavy forwarder and applied at index time. Can anyone help with this, please?
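A minimal props.conf sketch for the heavy forwarder, assuming the timestamp sits at the very start of the raw event and a hypothetical sourcetype name of win_app; Splunk's %d is often tolerant of unpadded day numbers, so it is worth testing it alongside %e:

[win_app]
# Hypothetical sourcetype; timestamp assumed to lead the event, e.g. 1/09/2022 08:15:00
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20

Since these props apply at index time on the heavy forwarder, they only affect newly ingested data; events from the 1st to the 9th that were mis-timestamped would need to be re-ingested.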
Hello, I currently have a field that contains a long string across 100+ events, and within that field there are varying file sizes (609.9 KB, 1GB, 300B, etc.). What I would like to do is come up with an eval or rex command that pulls out only the file sizes and places them into a new field called File_Size. I tried using the field extractor in the GUI, but it would only pick up a couple of the file sizes, and even then the field wouldn't show up in the available fields of my search. I've also tried writing different queries, but can't seem to get the capture groups set correctly. Can someone please advise on the best approach?
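A minimal rex sketch, assuming the sizes always look like a number (optionally decimal, optionally followed by a space) plus a unit of B/KB/MB/GB/TB, and that the long string lives in a hypothetical field called raw_text:

... | rex field=raw_text max_match=0 "(?<File_Size>\d+(?:\.\d+)?\s?(?:[KMGT]?B))"

With max_match=0, File_Size becomes a multivalue field whenever an event contains several sizes.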
Hi all, I've just configured a receiver for Apache ActiveMQ, but noticed in the journal logs that it takes more than 10s to retrieve logs from the ActiveMQ endpoint. There is no collection-interval parameter stated in the documentation page for ActiveMQ (https://github.com/signalfx/signalfx-agent/blob/main/docs/monitors/collectd-activemq.md), nor in the genericjmx documentation (https://docs.splunk.com/Observability/gdi/genericjmx/genericjmx.html). May I know if there is any way to set a custom collection interval for the collectd receiver, or any other receiver, for that matter?
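A minimal sketch, assuming the SignalFx Smart Agent is in use; intervalSeconds is part of the common configuration shared by its monitors, so an override like this may work (host, port, and credentials are placeholders):

monitors:
  - type: collectd/activemq
    host: localhost
    port: 1099
    username: admin
    password: admin
    intervalSeconds: 60   # assumed per-monitor override of the agent-wide interval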
Hello, I found a way to change the line brush to draw dotted lines in line charts on XML dashboards, but not in Dashboard Studio. Is it possible? If so, is it also possible to mix line types in one chart (some lines solid, others dotted)? Thanks in advance.
We have a sample local ".txt" file with logs stored in the heavy forwarder's /tmp/ folder, and a sourcetype configured on the heavy forwarder to parse the log as we wish. All of this was set up from the web interface. Originally we made the mistake of creating the index on the heavy forwarder so we could assign it from the "Input Settings" step of the "Add Data" menu, but we later learned this should not be done that way. So we created an index named "test" on our cluster master and it replicated correctly to the two peer indexers. The index now exists but holds no data, and it does not appear on the search head. Unfortunately, when assigning the destination index from the heavy forwarder's Add Data menu, the "test" index created on the indexers does not appear. Moreover, even when the index existed on both the indexers and the heavy forwarder, events never reached the indexers after selecting "test"; in that case the index did appear in the HF menu, presumably because it had been created locally there.
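For reference, a minimal sketch of the usual clustered setup, assuming default paths; the index is defined once on the cluster master under master-apps and pushed to the peers. The heavy forwarder's Add Data dropdown generally lists only indexes defined locally, so pointing an input at a remote index usually means typing the name into inputs.conf or defining a matching (empty) stanza on the HF:

# On the cluster master: $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
[test]
homePath   = $SPLUNK_DB/test/db
coldPath   = $SPLUNK_DB/test/colddb
thawedPath = $SPLUNK_DB/test/thaweddb
# then push to the peers with: splunk apply cluster-bundle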
Hi, I integrated Trend Micro DDI with Splunk using the app, but DDI signature names contain spaces, and when Splunk parses the signature name it only keeps the first word. For example, if the signature name is "possible scanning activity", in Splunk I can only see that the signature name is "Possible"; the rest does not come through. Can someone please help with this? It is very urgent.
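A minimal search-time sketch, under the assumption that the raw event carries an unquoted key=value pair such as signature=possible scanning activity (the field name and delimiter are guesses); a rex with a lookahead can capture everything up to the next key or end of line:

... | rex "signature=(?<signature_full>.+?)(?=\s+\w+=|$)"

A more permanent fix would be an EXTRACT override in the app's props.conf, once the exact raw format is confirmed.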
Hello, I have a search that outputs table data that looks like this:

hst    code  type
hosta  01    master
hosta  02    master
hostb  01    host
hostb  03    host
hostc  02    host
hostd  04    host
hoste  05    master
hoste  06    master
hostf  06    host
hostg  08    host

etc. I am trying to filter the events but am unable to do so. My goal is to filter events based on this condition: if a code on a master also exists on a host, then the host rows with that code should be removed. So my desired output should look like this:

hst    code  type
hosta  01    master
hosta  02    master
hostb  03    host
hostd  04    host
hoste  05    master
hoste  06    master
hostg  08    host

I hope someone can help me. Thanks in advance. Regards, Harry
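A minimal sketch, assuming the rows above come from one search with fields hst, code, and type; eventstats gathers every code seen on any master row, and the where clause keeps master rows plus host rows whose code is not in that set:

... | eventstats values(eval(if(type=="master", code, null()))) as master_codes
| where type=="master" OR isnull(mvfind(master_codes, "^".code."$"))
| fields - master_codes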
Hi, I am trying to mask data at index time; can you please help? The first line is the current result and the second is what I would like it to become. Thanks.

"authenticationValue":"AAcBBGJxFAAAAZZANIJZdQAAAAA="
Result: "authenticationValue":"****************************"
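A minimal index-time sketch using SEDCMD in props.conf on the parsing tier (heavy forwarder or indexer), with a hypothetical sourcetype name of my_json; note that sed substitutes a fixed-length mask rather than one asterisk per original character:

[my_json]
SEDCMD-mask_auth = s/("authenticationValue":")[^"]+(")/\1****************************\2/g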
Hello Splunkers, I have the search below, intended to detect only the local IP intel specified manually by the user:

| tstats min(_time) as firstSeen max(_time) as lastSeen count from datamodel="Threat_Intelligence"."Threat_Activity" where Threat_Activity.threat_key=local_ip_intel by Threat_Activity.weight Threat_Activity.threat_match_value Threat_Activity.threat_match_field Threat_Activity.src Threat_Activity.dest Threat_Activity.orig_sourcetype Threat_Activity.threat_collection Threat_Activity.threat_collection_key
| rename Threat_Activity.* as *
| join type=left threat_match_value [| inputlookup local_ip_intel.csv | rename ip as threat_match_value description as desc | fields threat_match_value desc]

My goal is to show a description next to each local IP threat match, to ease analysis and record why the intel was added in the first place. The search runs, and the results do show matches against the local IP intel, but it returns more than the local IP list contains: it appears to be matching IPs that are not even in the list, so no description can be joined onto those rows. PS: the subsearch does show all the IPs I added manually. I'd appreciate anyone pointing out where I'm going wrong.
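A minimal follow-up sketch, on the assumption that the extra rows are matches from other intel sources that share the same threat collection; if only the hand-curated rows are wanted, the simplest filter is to drop every row the join could not decorate:

... | join type=left threat_match_value [| inputlookup local_ip_intel.csv | rename ip as threat_match_value description as desc | fields threat_match_value desc]
| where isnotnull(desc)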
Hi, consider this event structure:

{"result" : {"dogs" : [{"name" : "dog-a", "food":["pizza", "burger"] }, {"name" : "dog-b", "food":["pasta"] }] }}

Now I want to filter the dogs by name and show only the matching dog's food. When I try this search (with the relevant index):

result.dogs{}.name = dog_a | table result.dogs{}.food{}

I am getting this result: pizza burger pasta. I am expecting to get only dog-a's foods (pizza and burger).
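A minimal sketch using the event structure above; expanding each dog object into its own result row before filtering keeps each name paired with its own food list:

... | spath path=result.dogs{} output=dog
| mvexpand dog
| spath input=dog
| search name="dog-a"
| table name food{}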
Hello, I am attempting to create a heat gauge from the average of two timestamp fields, to determine the average time an issue was worked. I'm running into issues because these fields are stored as strings in ISO 8601 format. I'd like to know if there's a simple way to convert the string, or to extract portions of it, so I can use a numeric value in the average calculation (ideally extracting the MM:SS from the ISO string). Thanks!
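A minimal sketch, assuming hypothetical field names opened_at and closed_at holding values like 2022-09-01T13:45:30Z (adjust the strptime format string to the exact variant in the data); converting each string to epoch seconds makes the average duration plain arithmetic:

... | eval start=strptime(opened_at, "%Y-%m-%dT%H:%M:%S%Z")
| eval end=strptime(closed_at, "%Y-%m-%dT%H:%M:%S%Z")
| eval work_secs=end - start
| stats avg(work_secs) as avg_work_secs
| eval avg_work_duration=tostring(round(avg_work_secs), "duration")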
Hello, I am attempting to make a table and, hopefully, integrate it into a dashboard. The goal is to interrogate two fields and pull stats accordingly. FieldA has multiple values, and the table should show all of them, using stats count for how many daily transactions each unique value of FieldA has processed. The part I am having difficulty with: for each FieldA value's daily count, I want to know how many of those events are a hit for any value of FieldB. This is the code I am using:

table FieldA FieldB | fields "FieldB", "FieldA " | fields "FieldB", "FieldA " | stats count by FieldA, FieldB | sort -"count"

The second count (FieldB hits out of the FieldA total) always shows up as zero, despite FieldB having values other than zero. FieldB values should all be numeric.
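A minimal sketch, assuming a "hit" simply means FieldB is present on the event (an assumption); an eval inside stats produces both the per-value total and the hit count in one pass:

... | stats count as total count(eval(isnotnull(FieldB))) as fieldB_hits by FieldA
| sort - total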
Hi, I would like to set time ranges from 2 different types of inputs, Dropdown and Time, as shared tokens for a panel. Currently this is the code I have; the Dropdown has 4 options, and the Time input appears depending on the last dropdown option. I am stuck on passing the time range from the Time input into the "custom_earliest" and "custom_latest" tokens.

<fieldset submitButton="false">
  <input type="dropdown" token="field1">
    <label>Time Selection</label>
    <choice value="yesterday">Yesterday</choice>
    <choice value="-7d">Last 7 Days</choice>
    <choice value="mtd">Month To Date</choice>
    <choice value="custom">Custom Time</choice>
    <change>
      <condition label="Yesterday">
        <set token="custom_earliest">-7d@d+7h</set>
        <set token="custom_latest">@d+7h</set>
        <unset token="showCustom"></unset>
      </condition>
      <condition label="Last 7 Days">
        <set token="custom_earliest">-4w@d+7h</set>
        <set token="custom_latest">@d+7h</set>
        <unset token="showCustom"></unset>
      </condition>
      <condition label="Month To Date">
        <set token="custom_earliest">-5mon@mon+7h</set>
        <set token="custom_latest">@d+7h</set>
        <unset token="showCustom"></unset>
      </condition>
      <condition label="Custom Time">
        <set token="showCustom">Y</set>
      </condition>
    </change>
    <default>yesterday</default>
    <initialValue>yesterday</initialValue>
  </input>
  <input type="time" token="customTime" depends="$showCustom$">
    <label>Time Range</label>
    <default>
      <earliest>-3d@d+7h</earliest>
      <latest>-2d@d+7h</latest>
    </default>
  </input>
</fieldset>

Any help would be appreciated, thanks!
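A minimal sketch extending the time input above; a time input exposes $customTime.earliest$ and $customTime.latest$, and a <change> block can copy them into the shared tokens whenever the user picks a new range:

<input type="time" token="customTime" depends="$showCustom$">
  <label>Time Range</label>
  <change>
    <set token="custom_earliest">$customTime.earliest$</set>
    <set token="custom_latest">$customTime.latest$</set>
  </change>
  <default>
    <earliest>-3d@d+7h</earliest>
    <latest>-2d@d+7h</latest>
  </default>
</input>

The panel search would then use earliest=$custom_earliest$ latest=$custom_latest$ regardless of which dropdown option is active.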
Dear all, I want to combine 2 search jobs into 1 job. My first search finds all the alert_id values that occurred in the past 24 hours and lists them in a table. The second search finds which of those alert_id values have a "packet filtered" event. I can generate the desired result using map:

index="security_device" sourcetype=security_log "abnormal Protocol" alert_id
| table alert_id
| map search="search index="security_device" sourcetype=security_log "Filter action" $alert_id$" maxsearches=500
| table filter-discard

However, I notice that map is very inefficient; it takes forever if I select 30 days. Can anyone recommend a better way to do it? FYI, I have tried a nested search, but no luck; it returns 0 results:

index="security_device" sourcetype=security_log "Filter action" [ search index="security_device" sourcetype=security_log "abnormal Protocol" alert_id | table alert_id ]
| table filter-discard

Thank you.
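A minimal single-pass sketch, assuming both event types carry an extracted alert_id field; searching both phrases at once and correlating with eventstats avoids both map and the subsearch result limits:

index="security_device" sourcetype=security_log ("abnormal Protocol" OR "Filter action")
| eval is_abnormal=if(searchmatch("abnormal Protocol"), 1, 0)
| eventstats max(is_abnormal) as has_abnormal by alert_id
| where has_abnormal=1 AND searchmatch("Filter action")
| table filter-discard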
I have two values in the source field and I need to hide one of them, i.e., http:kafka.
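A minimal sketch, assuming "hide" means excluding those events from the search results:

... source!="http:kafka"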
Which apps are used in a Splunk SOC at a bank, e.g., for threat intel, incident response, and so on?
Hi all, I am quite new to Splunk and am trying to create a dashboard panel using a query that does the following:

- pulls the required fields from an index based on textfield input
- checks one specific field "opsID" from the index against a field "code" in a csv I uploaded
- if it is present in the csv, returns a simple output that I can display in table form

The csv looks something like this:

code, notes
123, User
456, Admin
789, User

Example of my query:

index=userdatabase "abc12345"
| eval abc=[|inputlookup Lookup.csv | where code=opsID | fields notes]
| eval isPresent=if(abc!="", YES, NO)
| table username, isPresent

However I am getting errors like: Error in 'eval' command: The expression is malformed. An unexpected character is reached at ')'. I have tried for a few days but can't figure out my mistake, hence hoping for some help with my basic question. I have a feeling my logic could be wrong to begin with.
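A minimal sketch of the usual pattern; the lookup command matches opsID against the csv's code column and returns notes, so no eval subsearch is needed (an eval subsearch can only return a single literal string, which is one reason the expression comes back malformed; YES and NO also need quotes):

index=userdatabase "abc12345"
| lookup Lookup.csv code AS opsID OUTPUT notes
| eval isPresent=if(isnotnull(notes), "YES", "NO")
| table username, isPresent

This assumes Lookup.csv is uploaded as a lookup table file accessible from the app.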
Hi Team, I'm new to the Splunk tool and have a question about how to hunt for the following in Splunk:
1) Investigate network connections associated with GitHub usage.
2) Look for unusual downloads and command-line/code executions involving GitHub.
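A couple of minimal starting sketches, with index, sourcetype, and field names as placeholders that depend entirely on the environment (proxy logs in index=proxy and Sysmon process-creation events in index=endpoint are assumptions):

index=proxy (url="*github.com*" OR url="*githubusercontent.com*")
| stats count values(url) as urls by src_ip user

index=endpoint sourcetype=XmlWinEventLog EventCode=1 (CommandLine="*github.com*" OR CommandLine="*git clone*")
| table _time host user ParentImage Image CommandLine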
Hi Splunkers. I have two levels of logs (NOTICE, ERROR). For ERROR logs (JSON), method_name and message are extracted automatically, but not for NOTICE logs, so I have written a case statement like the one below in the UI and it works fine, but I'm not sure how to deploy it in props.conf:

index=index_name sourcetype=sourcetype_name log_level=NOTICE
| eval message=case(method_name=="protopayload.table.create", "table created", method_name=="protopayload.table.delete", "table deleted")

I don't want to write a case statement for ERROR logs, as they are already extracted fine. To be precise: I want field extraction to keep happening automatically for ERROR logs, and the case statement to apply only to NOTICE logs. Please assist.
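A minimal sketch for props.conf on the search head, assuming a search-time calculated field is acceptable; the true() branch falls back to the already-extracted message so ERROR events keep their original value:

[sourcetype_name]
EVAL-message = case(log_level=="NOTICE" AND method_name=="protopayload.table.create", "table created", log_level=="NOTICE" AND method_name=="protopayload.table.delete", "table deleted", true(), message)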