All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

My ad-hoc searches are getting auto-cancelled at random. I am running them with admin privileges, there is no problem with RAM, and I do not have any limits.conf or authorize.conf under system/local.
There is some malware that is enabled by a Word macro and runs a VB script to communicate with the outside. I want to find out which hosts opened a Word document and executed a VB script (I don't know the Word file name). Is there any search that can help? Thanks.
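If Sysmon (or similar endpoint) process-creation events are being indexed, one hedged starting point is to look for Word spawning a script host. The sourcetype, EventCode, and field names below are assumptions that vary by deployment:

```spl
index=* sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1
    ParentImage="*\\winword.exe" (Image="*\\wscript.exe" OR Image="*\\cscript.exe")
| stats count earliest(_time) AS first_seen values(CommandLine) AS commands by host, ParentImage, Image
```

EventCode 1 is Sysmon's process-creation event, so this catches wscript.exe/cscript.exe launched directly by winword.exe regardless of the document's file name.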
I have one txt file with only one column; the file contains around 60 SHA-256 hashes of malicious files. I want to search the system for any hosts associated with these hashes. What would the search string be that I need to pass to Splunk? Thanks.
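One common pattern is to upload the file as a lookup and feed it into a subsearch. This is a sketch: the lookup name malicious_hashes.csv, its header field hash, and the event field file_hash are all assumptions to adapt to your data:

```spl
index=* [ | inputlookup malicious_hashes.csv | rename hash AS file_hash | fields file_hash ]
| stats count values(sourcetype) AS sourcetypes by host, file_hash
```

The subsearch expands into a big OR of file_hash="..." terms, so the outer search returns only events whose hash matches one of the 60 values.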
Recently did a Linux OS upgrade. On running health checks I noticed splunkd is not running on my indexer Cluster Master. Should it be? I am thinking not, since it's just for pushing updates to my cluster, as I understand it. Thanks.
Hi Splunkers,

I'm using Splunk_TA_nix, monitoring /var/log, and my problem is that different event types in this directory are assigned different hosts. For example:

The first event type, /var/log/messages, has host "aaabbb":
"Aug 2 12:28:26 aaabbb systemd: Removed slice User Slice of pcp."
Here the host name is extracted from the event.

Another event type, /var/log/secure:
"Aug 2 08:25:42 aaabbb sshd[53313]: Accepted password for root from 192.168.3.145 port 55419 ssh2"
has host "CCCDDD", which comes from Splunk's default host name.

Please tell me how to set up a unified host name for this add-on, using Splunk's default host name CCCDDD.
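If the goal is to force the default host onto everything under /var/log, one approach (a sketch; the stanza path and host value are taken from the question) is to pin host on the monitor stanza on the forwarder:

```ini
# $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/inputs.conf
[monitor:///var/log]
host = CCCDDD
```

Note, as a hedge: sourcetypes such as syslog apply a host-override transform at index time, which is likely why /var/log/messages takes its host from the event text. If that applies here, the syslog host-extraction transform would also need to be disabled in props.conf for those events.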
Searching for the last brute-force login attempt failures from sources and hosts, I am getting multiple results, so I need to set up a threshold to tune the traffic.
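As a sketch of a simple threshold (the index, field names, and cutoff of 5 are all assumptions to adapt), count failures per source/host pair and keep only the noisy ones:

```spl
index=auth action=failure
| stats count AS failures latest(_time) AS last_failure by src, host
| where failures > 5
| sort - failures
```

Raising or lowering the `where failures > 5` cutoff is how the alert volume gets tuned.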
Imagine the following data set:

STUDENT   EOY_GRADE  GENDER  STUDENT_STATUS
Alice     96         Female  ACTIVE
Bob       94         Male    ACTIVE
Candice   92         Female  FORMER
Debbie    94         Female  FORMER
Eddie     94         Male    FORMER
Frank     96         Male    FORMER

And I would like to produce the following output comparing current students to former ones:

STUDENT  EOY_GRADE  PREV_GENDER_AVG  PREV_CLASS_AVG  CURRENT_CLASS_AVG
Alice    96         93               94              95
Bob      94         95               94              95

Thanks in advance for consideration and thoughts.
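One hedged way to produce that output in SPL is conditional eventstats (field names taken from the question; the if()/null() trick restricts each average to one status group):

```spl
| eventstats avg(eval(if(STUDENT_STATUS=="FORMER", EOY_GRADE, null()))) AS PREV_CLASS_AVG
| eventstats avg(eval(if(STUDENT_STATUS=="FORMER", EOY_GRADE, null()))) AS PREV_GENDER_AVG by GENDER
| eventstats avg(eval(if(STUDENT_STATUS=="ACTIVE", EOY_GRADE, null()))) AS CURRENT_CLASS_AVG
| where STUDENT_STATUS=="ACTIVE"
| table STUDENT EOY_GRADE PREV_GENDER_AVG PREV_CLASS_AVG CURRENT_CLASS_AVG
```

On the sample data this yields PREV_CLASS_AVG=94 for everyone, PREV_GENDER_AVG=93 for Alice (former females 92 and 94) and 95 for Bob (former males 94 and 96), and CURRENT_CLASS_AVG=95, matching the requested table.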
We have a Java-based REST service A with logfile a.log and another REST service B with log b.log. When A receives a request, it creates a unique request id (let's call it ABigRequestId) and splits the request into multiple smaller requests, each with its own unique request id (ASmallerRequestId1, ASmallerRequestId2, ASmallerRequestId3, ...), and sends these requests to service B.

So I am able to search based on "ABigRequestId" inside a.log, look for a CREATE_SMALLER_REQUESTS event, and get all of (ASmallerRequestId1, ASmallerRequestId2, ASmallerRequestId3). But now, with these request ids, I want to search b.log and look for other events there, and I am not able to do this in the same query. Can you please suggest?
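A sketch of a single two-step search, assuming ASmallerRequestId is an extracted field in the a.log events and the small ids appear literally in the b.log events (the `rename ... AS search` makes the subsearch return its values as bare search terms; `<the-big-id>` is a placeholder):

```spl
index=b source="b.log"
    [ search index=a source="a.log" ABigRequestId="<the-big-id>" CREATE_SMALLER_REQUESTS
      | fields ASmallerRequestId
      | rename ASmallerRequestId AS search ]
```

The inner search runs first against a.log, and its results become an OR of terms that filter the outer search over b.log.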
I would like to know how I can reset my password or check my username so I can log in to the Splunk app, because I can't. I just created the account, but maybe I typed something incorrectly. Thanks.
Hello, how does a user uninstall an app using the CLI? I entered the following in the terminal:

./splunk remove app [appname] -auth <username>:<password>

I got an error: parse error near `\n'

Thanks,
Splunk Newbie
Hi,

I have a complicated dashboard that is based on a scheduled saved report. All menus and panels are fed from one report, and the report already has a _time field. I am looking back a maximum of 30 days of data, and this is reflected in my adjustment of the time picker as well. I want to be able to use the time picker in the dashboard when I filter for various times. When I use a saved search and token in the code below, all works fine; but when I use the saved search as a reference, the time picker does not work. What am I missing? Thanks in advance!

<form theme="light">
  <search ref="TEST_au1_1" id="Main_Search">
    <earliest>$timerange.earliest$</earliest>
    <latest>$timerange.latest$</latest>
  </search>
  <label>TEST AU 1</label>
  <fieldset autoRun="true" submitButton="true">
    <input type="time" token="timerange" searchWhenChanged="false">
      <label>Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="servertype" searchWhenChanged="false">
      <label>Server Type</label>
      <initialValue>TEST</initialValue>
      <fieldForLabel>Server_Type</fieldForLabel>
      <fieldForValue>Server_Type</fieldForValue>
      <search base="Main_Search">
        <query>| stats dc(Server_Type) AS count By Server_Type</query>
      </search>
      <default>TEST</default>
    </input>
    <input type="dropdown" token="accttype" searchWhenChanged="false">
      <label>Account Type</label>
      <choice value="*">All</choice>
      <initialValue>*</initialValue>
      <fieldForLabel>UserType</fieldForLabel>
      <fieldForValue>UserType</fieldForValue>
      <search base="Main_Search">
        <query>| search Server_Type="$servertype$" | stats dc(UserType) AS count By UserType</query>
      </search>
      <default>*</default>
    </input>
    <input type="dropdown" token="user" searchWhenChanged="false">
      <label>User Filter:</label>
      <choice value="*">All</choice>
      <default>*</default>
      <initialValue>*</initialValue>
      <search base="Main_Search">
        <query>| search Server_Type="$servertype$" AND UserType="$accttype$" | stats dc(User) as count BY User | fields User</query>
      </search>
      <fieldForLabel>User</fieldForLabel>
      <fieldForValue>User</fieldForValue>
    </input>
    <input type="dropdown" token="priority" searchWhenChanged="false">
      <label>Priority Filter</label>
      <choice value="*">All</choice>
      <default>*</default>
      <initialValue>*</initialValue>
      <search base="Main_Search">
        <query>| search Server_Type="$servertype$" AND UserType="$accttype$" AND User="$user$" | stats dc(Priority) as count by Priority | fields Priority | sort order</query>
      </search>
      <fieldForLabel>Priority</fieldForLabel>
      <fieldForValue>Priority</fieldForValue>
    </input>
    <input type="dropdown" token="results" searchWhenChanged="false">
      <label>Results</label>
      <default>*</default>
      <initialValue>*</initialValue>
      <fieldForLabel>Result</fieldForLabel>
      <fieldForValue>Result</fieldForValue>
      <search base="Main_Search">
        <query>| search Server_Type="$servertype$" AND UserType="$accttype$" AND User="$user$" AND Priority="$priority$" | stats dc(Result) as count by Result | fields Result</query>
      </search>
      <choice value="*">All</choice>
    </input>
  </fieldset>
  <row depends="$hide$">
    <panel>
      <title>This panel is for hiding unnecessary time ranges since we are using max 30 Days of data as per requirements. DO NOT Delete this panel.</title>
      <html>
        <p>
          <style>
            div[data-test="other-column"],
            div[data-test="real-time-column"],
            button[data-test^='Previous business week'],
            button[data-test^='Business week to date'],
            button[data-test^='Year to date'],
            button[data-test^='Previous week'],
            button[data-test^='Previous month'],
            button[data-test^='Previous year'],
            button[data-test^='Yesterday'],
            div[data-test-panel-id^='real'],
            div[data-test-panel-id^='relative'],
            div[data-test-panel-id^='dateTime'],
            div[data-test-panel-id^='date'],
            div[data-test-panel-id^='advanced'] { display:none !important; }
          </style>
        </p>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <title>Events Timechart</title>
      <chart>
        <search base="Main_Search">
          <query>| search UserType="$accttype$" AND User="$user$" AND Priority="$priority$" AND Result="$results$" | timechart count</query>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">-45</option>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.visibility">collapsed</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.chart">line</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.resultTruncationLimit">1000000</option>
        <option name="charting.chart.showDataLabels">all</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisStart</option>
        <option name="charting.legend.placement">none</option>
        <option name="link.exportResults.visible">0</option>
        <option name="link.inspectSearch.visible">0</option>
        <option name="link.openPivot.visible">0</option>
        <option name="link.openSearch.visible">0</option>
        <option name="refresh.display">progressbar</option>
        <option name="refresh.link.visible">0</option>
      </chart>
    </panel>
  </row>
</form>
Hello, I am trying to get metrics from RouterOS using scripting (the logs are forwarded over UDP), and I end up with all timestamps 3 hours in the future (I tried adding TZ = GMT, which didn't help).

I created a custom format like this:

`script, debug <TIMESTAMP> metric_name=firewall_rule <OTHER.DIMS..> packets=100 bytes = 11`

Example:

script, debug aug/01/2020 17:35:14 +03:00:00 metric_name=firewall_rule rule=dummy bytes=12345 packet=40

I also tried:

| makeresults
| eval test=strptime("aug/01/2020 17:35:14 +03:00:00", "%b/%d/%Y %T %::z")

and I get the correct UNIX timestamp in the query results.

transforms.conf:

[metric-schema:log2metrics_mikrotik_keyvalue]
METRIC-SCHEMA-MEASURES-firewall_rule = packets, bytes

props.conf:

[log2metrics_mikrotik_keyvalue]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
METRIC-SCHEMA-TRANSFORMS = metric-schema:log2metrics_mikrotik_keyvalue
NO_BINARY_CHECK = true
TRANSFORMS-EXTRACT = field_extraction
category = Metrics
pulldown_type = 1
# RouterOS
# mmm/dd/yyyy HH:MM:SS [+-]TZHH:TZMM:TZSS
TIME_FORMAT = %b/%d/%Y %T %::z
TZ = GMT
disabled = false
SHOULD_LINEMERGE = false
BREAK_ONLY_BEFORE_DATE =
PREAMBLE_REGEX = script,debug
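A hedged guess at the 3-hour skew: the event already carries an explicit +03:00:00 offset, so if that offset were actually parsed, no TZ override should be needed. One thing worth trying is anchoring timestamp extraction past the "script, debug" preamble with TIME_PREFIX and giving the parser enough lookahead. The stanza below is a sketch, not a verified fix:

```ini
# props.conf (sketch) -- anchor timestamp parsing after the preamble
[log2metrics_mikrotik_keyvalue]
TIME_PREFIX = ^script,\s*debug\s+
TIME_FORMAT = %b/%d/%Y %T %::z
MAX_TIMESTAMP_LOOKAHEAD = 35
```

If the preamble confuses extraction, Splunk can fall back to current time or misread the offset, which would produce exactly a constant skew.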
I have an indexer cluster with Search Factor 2 and Replication Factor 3, and 5 indexer peers at the moment. I'm getting the following messages on a number of buckets under "Fixup Tasks - Pending", and they don't seem to be going away after a number of hours:

Fixup Reason: streaming failure - src=XXXXXX tgt=XXXX failing=tgt
Current Status: Missing enough suitable candidates to create replicated copy in order to meet replication policy. Missing=( default: 1 )

What has gone wrong here?
Hello, I have many logs like:

"_time1 user=A eventid=45"
"_time2 user=A eventid=46"
"_time3 user=A eventid=48"
"_time4 user=B eventid=45"
"_time5 user=A eventid=46"

and I want to build a new transaction event like:

"_time1 user=A eventid=45 _time2 user=A eventid=46 _time3 user=A eventid=48"
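A sketch using the transaction command (the index, sourcetype, and maxspan are placeholders to adapt), which merges each user's events into one multi-line event:

```spl
index=main sourcetype=your_sourcetype user=*
| transaction user maxspan=10m
```

If only the field values are needed rather than the merged raw text, `| stats list(_time) AS times list(eventid) AS eventids by user` is usually cheaper than transaction.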
Hi everyone,

I have a column chart visualized by this SPL search:

index="orders"
| eval TimeOfTheDay=case(date_hour>=0 AND date_hour<=6,"Dawn", date_hour>6 AND date_hour<=12,"Morning", date_hour>12 AND date_hour<=18,"Afternoon", date_hour>18 AND date_hour<=23,"Evening")
| eval orderedTimeOfTheDay=case(TimeOfTheDay=="Dawn",1,TimeOfTheDay=="Morning",2,TimeOfTheDay=="Afternoon",3,TimeOfTheDay=="Evening",4)
| stats count(order_id) as Orders by orderedTimeOfTheDay, TimeOfTheDay
| table TimeOfTheDay, Orders

How can I set different colors for each column in this visualization? Thank you in advance.
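charting.fieldColors maps colors to series names, and this chart has only one series ("Orders"). One hedged workaround is to transpose the result so each TimeOfTheDay value becomes its own series, then assign per-series colors in the panel XML (the hex values are arbitrary examples):

```spl
... | stats count(order_id) AS Orders by TimeOfTheDay
| transpose header_field=TimeOfTheDay
```

```xml
<option name="charting.fieldColors">{"Dawn": 0x1F77B4, "Morning": 0x2CA02C, "Afternoon": 0xFF7F0E, "Evening": 0x9467BD}</option>
```

After the transpose, each time-of-day is a separate series, so fieldColors can color each column individually.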
How do I get free access to Splunk Fundamentals with my Upwardly Global account?
Hi, I am new to Splunk and I have the following CPU architecture running Debian Buster 10:

processor : 0
model name : ARMv7 Processor rev 10 (v7l)
BogoMIPS : 6.00
Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpd32
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x2
CPU part : 0xc09
CPU revision : 10

Will Splunk Enterprise be able to run on this system, or do I have to use the Splunk forwarder only?
Hi, I copied several log files to the path /opt/logs/ and added the directory to Splunk. All files were indexed and imported into Splunk except a single log file that is 15 GB in size! Any idea? Thanks.
I will need an additional day to complete the final quiz for Fundamentals 3 if this doesn't come up in the next couple of hours. It is 10:30 pm (EST) here.
Hello,

I am trying to use substr in <init>, but it looks like it's not working. The problem is that the $data_type$ value is "(data_type=state)" or "(data_type=country)" or "(data_type=state OR country)". I need a way to create another token like cur_data which gets the substr of $data_type$ from index 8 to 12. I tried, but substr in <init> did not work:

<eval token="cur_data">substr($data_type$, 8, 12)</eval>

The data_type field is part of the fieldset and is taken from the user via a dashboard filter.

<init>
  <set token="STATE_DATA">STATE_1, STATE_2, STATE_3, STATE_4</set>
  <set token="COUNTRY_DATA">COUNTRY_1, COUNTRY_2, COUNTRY_3, COUNTRY_4</set>
  <eval token="cur_data">substr($data_type$, 8, 12)</eval>
  <eval token="VCS_TYPES">if($cur_data$='state', $STATE_DATA$, $COUNTRY_DATA$)</eval>
</init>
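Two things worth checking, both hedged guesses: tokens in an <eval> are substituted as literal text before the expression is evaluated, so string values usually need to be wrapped in quotes; and <init> runs once at dashboard load, before the input ever fires, so the evals may need to live in the input's <change> block instead. A sketch of the quoted form:

```xml
<eval token="cur_data">substr("$data_type$", 8, 12)</eval>
<eval token="VCS_TYPES">if(like("$cur_data$", "%state%"), "$STATE_DATA$", "$COUNTRY_DATA$")</eval>
```

Without the quotes, "$data_type$" expands to bare text like (data_type=state), which is not a valid eval expression, so the whole <eval> silently fails.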