Activity Feed
- Posted Did my search work or not? on Splunk Search. 01-20-2022 09:20 AM
- Posted Re: Moving beyond noob queries (comparing results) on Splunk Search. 08-01-2020 09:24 PM
- Posted Re: Moving beyond noob queries (comparing results) on Splunk Search. 08-01-2020 08:26 PM
- Posted Re: Moving beyond noob queries (comparing results) on Splunk Search. 08-01-2020 06:35 PM
- Posted Moving beyond noob queries (comparing results) on Splunk Search. 08-01-2020 06:11 PM
- Got Karma for Schema Accelerated Event Search performance. 06-05-2020 12:50 AM
- Got Karma for advanced json handling. 06-05-2020 12:50 AM
- Got Karma for Re: What is the best way to get 100ish Greeen/Yellow/Red circles as tightly as possible in a visualization?. 06-05-2020 12:50 AM
- Got Karma for Re: What is the best way to get 100ish Greeen/Yellow/Red circles as tightly as possible in a visualization?. 06-05-2020 12:50 AM
- Karma Re: Search SPL to show messages menu for niketn. 06-05-2020 12:49 AM
- Got Karma for Can you help me craft a search that returns all indexes with their associated retention times?. 06-05-2020 12:49 AM
- Karma Re: Search formatting in Splunk 6.5.0 for easier readability for lquinn. 06-05-2020 12:48 AM
- Karma Re: Stymied by subsearches for woodcock. 06-05-2020 12:48 AM
- Karma Re: Stymied by subsearches for the_wolverine. 06-05-2020 12:48 AM
- Karma Re: Stymied by subsearches for gcusello. 06-05-2020 12:48 AM
- Got Karma for Re: Splunk 6.5 is ignoring https_proxy in splunk-launch.conf. 06-05-2020 12:48 AM
- Got Karma for Re: Splunk 6.5 is ignoring https_proxy in splunk-launch.conf. 06-05-2020 12:48 AM
- Got Karma for Re: Splunk 6.5 is ignoring https_proxy in splunk-launch.conf. 06-05-2020 12:48 AM
- Got Karma for Re: Splunk 6.5 is ignoring https_proxy in splunk-launch.conf. 06-05-2020 12:48 AM
- Got Karma for Re: In SPL, which one has better performance when used in search queries: "case" or "if"?. 06-05-2020 12:48 AM
01-20-2022
09:20 AM
I've been trying to resolve this since October without getting traction, so I'm turning to the community for help. I have seemingly contradictory information within the same log line, which makes me question whether we have an issue or not. On the one hand, I think I do, because the history command shows the search was cancelled... and I trust this information. On the other hand, there are artifacts in the logs that make me question whether the search actually ran fully (which appears to be true, since "fully_completed_search=TRUE")... so I am now confused about whether we have a problem or not. Why do searches show fully_completed_search=TRUE and has_error_warn=FALSE when the info field (and the history command) show "canceled" and carry a tag of "error"? BOTTOM LINE QUESTION: Are my searches running correctly and returning all results, or not? Here is sample _audit log search activity that I found - not sure if it gives any usable insight:
Audit:[timestamp=10-01-2021 16:31:40.338, user=redacted_user, action=search, info=canceled, search_id='1633105804.108286', has_error_warn=false, fully_completed_search=true, total_run_time=18.13, event_count=0, result_count=0, available_count=0, scan_count=133645, drop_count=0, exec_time=1633105804, api_et=1633104900.000000000, api_lt=1633105800.000000000, api_index_et=N/A, api_index_lt=N/A, search_et=1633104900.000000000, search_lt=1633105800.000000000, is_realtime=0, savedsearch_name="", search_startup_time="1270", is_prjob=false, acceleration_id="98DCBC55-D36C-4671-93CD-1A950D796EC4_search_redacted_user_311d202b50b71a64", app="search", provenance="N/A", mode="historical_batch", workload_pool=standard_perf, is_proxied=false, searched_buckets=53, eliminated_buckets=0, considered_events=133645, total_slices=331408, decompressed_slices=11305, duration.command.search.index=120, invocations.command.search.index.bucketcache.hit=53, duration.command.search.index.bucketcache.hit=0, invocations.command.search.index.bucketcache.miss=0, duration.command.search.index.bucketcache.miss=0,
invocations.command.search.index.bucketcache.error=0, duration.command.search.rawdata=2533, invocations.command.search.rawdata.bucketcache.hit=0, duration.command.search.rawdata.bucketcache.hit=0, invocations.command.search.rawdata.bucketcache.miss=0, duration.command.search.rawdata.bucketcache.miss=0, invocations.command.search.rawdata.bucketcache.error=0, roles='redacted', search='search index=oswinsec (EventID=7036 OR EventID=50 OR EventID=56 OR EventID=1000 OR EventID=1001) | eval my_ts2 = _time*1000 | eval indextime=_indextime |table my_ts2,EventID | rename EventID as EventCode']
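For anyone triaging the same symptom, a minimal sketch (field names taken from the audit record above; grouping choice is my own) that tallies how often searches log a canceled info value alongside a "fully completed" flag:

```spl
index=_audit action=search info=*
| stats count by info, fully_completed_search, has_error_warn
| sort - count
```

If canceled rows regularly show fully_completed_search=true, comparing result_count against scan_count for those search_ids would be a reasonable next step.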
08-01-2020
09:24 PM
HOT DANG!!! I FINALLY GOT IT!
| makeresults
| eval STUDENT="ALICE" |eval EOY_GRADE=96 |eval GENDER="FEMALE" |eval STUDENT_STATUS="ACTIVE"
| append [ makeresults | eval STUDENT="BOB" |eval EOY_GRADE=94 |eval GENDER="MALE" |eval STUDENT_STATUS="ACTIVE"]
| append [ makeresults | eval STUDENT="CANDICE" |eval EOY_GRADE=92 |eval GENDER="FEMALE" |eval STUDENT_STATUS="FORMER"]
| append [ makeresults | eval STUDENT="DEBBIE" |eval EOY_GRADE=94 |eval GENDER="FEMALE" |eval STUDENT_STATUS="FORMER"]
| append [ makeresults | eval STUDENT="EDDIE" |eval EOY_GRADE=94 |eval GENDER="MALE" |eval STUDENT_STATUS="FORMER"]
| append [ makeresults | eval STUDENT="FRANK" |eval EOY_GRADE=96 |eval GENDER="MALE" |eval STUDENT_STATUS="FORMER"]
| eval MALE_GRADE=if(GENDER="MALE" AND STUDENT_STATUS="FORMER",EOY_GRADE,"")
| eval FEMALE_GRADE=if(GENDER="FEMALE" AND STUDENT_STATUS="FORMER",EOY_GRADE,"")
| eval PREVIOUS_GRADE=if(STUDENT_STATUS="FORMER",EOY_GRADE,"")
| eval CURRENT_GRADE=if(STUDENT_STATUS="ACTIVE",EOY_GRADE,"")
| eval STUDENT_STRING=STUDENT.",".EOY_GRADE.",".GENDER.",".STUDENT_STATUS
| stats avg(CURRENT_GRADE) AS CURRENT_CLASS_AVG, avg(MALE_GRADE) AS PREV_MALE_AVG, avg(FEMALE_GRADE) AS PREV_FEMALE_AVG, avg(PREVIOUS_GRADE) AS PREV_CLASS_AVG, list(STUDENT_STRING) AS STUDENTS
| mvexpand STUDENTS
| search STUDENTS="*,ACTIVE"
| rex field=STUDENTS "(?<STUDENT>.*),(?<EOY_GRADE>.*),(?<GENDER>.*),(?<STUDENT_STATUS>.*)"
| eval PREV_GENDER_AVG=if(GENDER="MALE",PREV_MALE_AVG,PREV_FEMALE_AVG)
| table STUDENT,EOY_GRADE,PREV_GENDER_AVG,PREV_CLASS_AVG,CURRENT_CLASS_AVG
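An alternative sketch of the same comparison, assuming the identical makeresults data above; eventstats attaches the averages to every row, which avoids the list/mvexpand round trip (syntax not verified against every Splunk version):

```spl
| eventstats avg(eval(if(STUDENT_STATUS="ACTIVE",EOY_GRADE,null()))) AS CURRENT_CLASS_AVG
    avg(eval(if(STUDENT_STATUS="FORMER",EOY_GRADE,null()))) AS PREV_CLASS_AVG
| eventstats avg(eval(if(STUDENT_STATUS="FORMER",EOY_GRADE,null()))) AS PREV_GENDER_AVG by GENDER
| where STUDENT_STATUS="ACTIVE"
| table STUDENT, EOY_GRADE, PREV_GENDER_AVG, PREV_CLASS_AVG, CURRENT_CLASS_AVG
```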
08-01-2020
08:26 PM
Getting closer... but still no dice:
| makeresults
| eval STUDENT="ALICE" |eval EOY_GRADE=96 |eval GENDER="FEMALE" |eval STUDENT_STATUS="ACTIVE"
| append [ makeresults | eval STUDENT="BOB" |eval EOY_GRADE=94 |eval GENDER="MALE" |eval STUDENT_STATUS="ACTIVE"]
| append [ makeresults | eval STUDENT="CANDICE" |eval EOY_GRADE=92 |eval GENDER="FEMALE" |eval STUDENT_STATUS="FORMER"]
| append [ makeresults | eval STUDENT="DEBBIE" |eval EOY_GRADE=94 |eval GENDER="FEMALE" |eval STUDENT_STATUS="FORMER"]
| append [ makeresults | eval STUDENT="EDDIE" |eval EOY_GRADE=94 |eval GENDER="MALE" |eval STUDENT_STATUS="FORMER"]
| append [ makeresults | eval STUDENT="FRANK" |eval EOY_GRADE=96 |eval GENDER="MALE" |eval STUDENT_STATUS="FORMER"]
| stats list(STUDENT) AS STUDENTS,list(GENDER) AS GENDERS,list(eval(if(GENDER="MALE" AND STUDENT_STATUS="FORMER",EOY_GRADE,""))) as MALE_GRADES, list(eval(if(GENDER="FEMALE" AND STUDENT_STATUS="FORMER",EOY_GRADE,""))) as FEMALE_GRADES,list(eval(if(STUDENT_STATUS="FORMER",EOY_GRADE,""))) as PREVIOUS_GRADES,list(eval(if(STUDENT_STATUS="ACTIVE",EOY_GRADE,""))) as CURRENT_GRADES by STUDENT_STATUS
08-01-2020
06:35 PM
To help, here is an SPL query to preload the data:
| makeresults
| eval STUDENT="ALICE" |eval EOY_GRADE=96 |eval GENDER="FEMALE" |eval STUDENT_STATUS="ACTIVE"
| append [ makeresults | eval STUDENT="BOB" |eval EOY_GRADE=94 |eval GENDER="MALE" |eval STUDENT_STATUS="ACTIVE"]
| append [ makeresults | eval STUDENT="CANDICE" |eval EOY_GRADE=92 |eval GENDER="FEMALE" |eval STUDENT_STATUS="FORMER"]
| append [ makeresults | eval STUDENT="DEBBIE" |eval EOY_GRADE=94 |eval GENDER="FEMALE" |eval STUDENT_STATUS="FORMER"]
| append [ makeresults | eval STUDENT="EDDIE" |eval EOY_GRADE=94 |eval GENDER="MALE" |eval STUDENT_STATUS="FORMER"]
| append [ makeresults | eval STUDENT="FRANK" |eval EOY_GRADE=96 |eval GENDER="MALE" |eval STUDENT_STATUS="FORMER"]
|table STUDENT,EOY_GRADE,GENDER,STUDENT_STATUS
08-01-2020
06:11 PM
Imagine the following data set:
STUDENT | EOY_GRADE | GENDER | STUDENT_STATUS
Alice | 96 | Female | ACTIVE
Bob | 94 | Male | ACTIVE
Candice | 92 | Female | FORMER
Debbie | 94 | Female | FORMER
Eddie | 94 | Male | FORMER
Frank | 96 | Male | FORMER
And I would like to produce the following output comparing current students to former:
STUDENT | EOY_GRADE | PREV_GENDER_AVG | PREV_CLASS_AVG | CURRENT_CLASS_AVG
Alice | 96 | 93 | 94 | 95
Bob | 94 | 95 | 94 | 95
Thanks in advance for consideration and thoughts.
12-11-2019
04:46 PM
I executed the following SPL with makeresults, but the results only give me the fields _time and _raw... I don't get parsed fields. Can this be solved?
|makeresults count=100|eval _raw="Process Create:
UtcTime: 2017-04-28 22:08:22.025
ProcessGuid: {a23eae89-bd56-5903-0000-0010e9d95e00}
ProcessId: 6228
Image: C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
CommandLine: \"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe\" --type=utility --lang=en-US --no-sandbox --service-request-channel-token=F47498BBA884E523FA93E623C4569B94 --mojo-platform-channel-handle=3432 /prefetch:8
CurrentDirectory: C:\Program Files (x86)\Google\Chrome\Application\58.0.3029.81\
User: LAB\rsmith
LogonGuid: {a23eae89-b357-5903-0000-002005eb0700}
LogonId: 0x7EB05
TerminalSessionId: 1
IntegrityLevel: Medium
Hashes: SHA256=6055A20CF7EC81843310AD37700FF67B2CF8CDE3DCE68D54BA42934177C10B57
ParentProcessGuid: {a23eae89-bd28-5903-0000-00102f345d00}
ParentProcessId: 13220
ParentImage: C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
ParentCommandLine: \"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe\" "
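Since makeresults builds the event at search time, no index-time extractions run against it; fields have to be pulled out explicitly. A minimal sketch appended to the query above, using rex in multiline mode (field names taken from the sample event; extend the same pattern for the other keys):

```spl
| rex field=_raw "(?m)^ProcessId:\s(?<ProcessId>\d+)$"
| rex field=_raw "(?m)^Image:\s(?<Image>.+)$"
| rex field=_raw "(?m)^User:\s(?<User>.+)$"
| table ProcessId, Image, User
```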
11-25-2019
05:01 AM
2 Karma
I used a Bubble Chart (or Scatter Chart) and set all my bubbles to the same size. Key chart configs include:
X axis interval: 1
Mark Min size: 1
Mark Max size: 17
Key to the success of the graph are the first 3 columns...
Column 1 should correspond to the value that you want colored. In my case, I used four: Okay, Caution, Concern & Critical
Column 2 is your X value
Column 3 is your Y value
| windbag
| eval lenSample=len(sample)
| eval positionStr="Position "+position
| eval health = case(lenSample<20,"Okay",lenSample<30,"Caution",lenSample<40,"Concern",lenSample>=0,"Critical")
| eval y=trunc(position/10)+1
| eval x=trunc(position%10)+1
| table health,x,y, positionStr
06-19-2019
11:42 AM
In the sample set above, Splunk would have 3 events. It recognizes 2 fields: user and h.hist{}{}
06-19-2019
11:32 AM
It is worth noting that the history is variable. Some users have a single machine, some have 2 machines, others have 15....
06-19-2019
11:23 AM
1 Karma
I have a simplified data set that shows users and the number of times they have been seen using a given computer. I want to use this to GUESS their primary computer. Simple, right? I think I'm missing something then... Here's my data set:
{
"user" : "user1",
"h" : {
"hist" : [
[
"computer1",
76
]
]
}
}
{
"user" : "user2",
"h" : {
"hist" : [
[
"computer2",
4
],
[
"computer3",
80
]
]
}
}
{
"user" : "user3",
"h" : {
"hist" : [
[
"computer4",
213
],
[
"computer5",
83
]
]
}
}
Results should be like:
user1 : computer1
user2 : computer3
user3 : computer4
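One hedged sketch toward that output, assuming the h.hist{}{} multivalue field interleaves names and counts in order as shown, and that computer names are never purely numeric (that assumption drives the mvfilter split):

```spl
| spath path=h.hist{}{} output=hist
| eval computers=mvfilter(!match(hist, "^\d+$")), counts=mvfilter(match(hist, "^\d+$"))
| eval pairs=mvzip(computers, counts)
| mvexpand pairs
| eval computer=mvindex(split(pairs, ","), 0), uses=tonumber(mvindex(split(pairs, ","), 1))
| sort 0 user, -uses
| dedup user
| table user, computer
```

dedup keeps the first (highest-count) row per user after the sort, which is the "guess" of the primary computer.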
03-22-2019
09:30 AM
Can you provide an example? I tested and my experience differs. I thought extract simply broke apart key/value pairs.
03-16-2019
08:59 PM
This is something slightly different, although I'll give you a nod that "| from datamodel" appears terribly broken. Here's the background... I was talking with a Splunk employee who was lauding the recent improvements in Splunk. Specifically, he said that data models now include a "hidden" pointer back to the actual raw event. This means you can search a data model to get the speed benefits of accelerated data models BUT your search can now return the FULL raw event- not just the data contained within the data model. Clearly this is SUPER useful because it opens a world of new possibilities. The obvious limitation is that the initial search constraint must be in the data model itself. It is also worth noting this same feature was mentioned by David Veuve in his Security Ninjitsu presentation at .conf2018.
The problem is that it doesn't work as advertised. 😞
03-07-2019
08:30 AM
1 Karma
I am super stoked about the potential of Schema Accelerated Event Searches- it might be one of the best improvements I've seen if I could actually get it to work- but it doesn't. 😞
Don't focus on the fact that I'm only returning the count of events... performance doesn't differ if I return the raw events (which is ultimately what I want to do)... I'm just doing the count so I can make an apples-to-apples comparison.
So consider the following two searches over 15 minutes of data:
SEARCH # 1
|tstats summariesonly=true count from datamodel="Web" where Web.user="dmerritt"
The value returned was 25. The search itself took 2.676 seconds
SEARCH # 2
|from datamodel Web|search user=dmerritt|stats count
The value returned was 106. The search itself took 2 minutes, 14 seconds.
QUESTIONS:
1) Why the HUGE difference in performance?
2) Why is the result count different?
NOTE: I am running Splunk 7.1.5.
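On question 2 specifically, a hedged diagnostic: with summariesonly=false, tstats falls back to raw data for any time spans the acceleration summary has not yet covered, so comparing the two counts shows whether the gap is an incomplete summary rather than a search bug (datamodel and user values reuse the example above):

```spl
| tstats summariesonly=false count from datamodel="Web" where Web.user="dmerritt"
```

If this returns 106 while summariesonly=true returns 25, the 15-minute window simply is not fully summarized yet.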
- Tags:
- splunk-enterprise
12-11-2018
12:36 PM
I'll refine my question to try to drive out the answer I need... can someone give me an SPL formula I can use to calculate the percent complete myself? If I can get the formula, that will tell me the information I am trying to discover. My problem is that my data model is at 55% complete and heading south every day.
12-11-2018
12:34 PM
Thanks but that's not it. I have events for each of the 30 days in the data model.
12-05-2018
11:01 AM
If an accelerated data model is 80% complete, what does that ACTUALLY mean? Does it mean I have 80% of the events? 80% of the time? 80% of the data?
Almost half my data models are not keeping up at even close to 100%, but I can't figure out what is actually "missing" from the data models.
- Tags:
- splunk-enterprise
11-01-2018
03:47 PM
Note - this really only works well if there are fewer than 500 distinct values in each field... but there is enough direction here that you can work with the idea yourself.
11-01-2018
03:25 PM
SOLVED IT... here is the query I landed on:
|inputlookup historyOfHashes.csv
|append [
|inputlookup lookupFileToMonitor.csv
|fieldsummary
|eval foo=MD5(values)
|stats values(foo) AS foofoo
|nomv foofoo
| rex mode=sed field=foofoo "s/ //g"
|eval finalfoo=MD5(foofoo)
|eval hashTimestamp=now()
|convert ctime(hashTimestamp)
|fields hashTimestamp,finalfoo]
|outputlookup historyOfHashes.csv
09-06-2018
11:41 AM
Updated info: If I use an account with Administrator privileges, I get the full list- not just the 32- so it must be a permission thing somehow.
09-06-2018
10:03 AM
I reran it again... even this simple query ONLY returns 32 indexes:
| rest /services/data/indexes-extended | table title frozenTimePeriodInSecs
09-06-2018
10:02 AM
Good call-out on the count limit... but I still only get 32. 😞
09-06-2018
10:01 AM
This is the exact scenario I am facing.
09-06-2018
06:49 AM
1 Karma
Technically, this is two questions in one with the goal of solving a single problem: I need an SPL query that returns ALL the indexes I can search and the associated retention time for each. Here is how far I've gotten:
| rest /services/data/indexes | eval yr = floor(frozenTimePeriodInSecs/86400/365)| eval dy = (frozenTimePeriodInSecs/86400) % 365 | eval ret = yr . " years, " . dy . " days" | stats list(splunk_server) list(frozenTimePeriodInSecs) list(ret) by title
The query above is very, very close, but it only returns a subset of the indexes — technically, it returns only 32 index names to me, and I have many more than that. (Note: starting with "| rest /services/admin/indexes ..." makes no difference either.)
My second query is this:
| eventcount summarize=false index=* index=_* | dedup index | fields index
That will return all 250+ index names, but I can't seem to find any way to get back to the retention period.
So my two questions are:
1) Why does the rest command pull only a subset (<15%) of the indexes that are returned by the eventcount query?
2) How can I get a single SPL query that shows all 250+ indexes and their associated retention setting?
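A sketch combining the two approaches: join keeps every index name from eventcount and attaches retention wherever the REST endpoint can see it. Note that rest only returns indexes visible to your role, which is a likely explanation for getting back only 32 on a non-admin account; count=0 removes any paging limit.

```spl
| eventcount summarize=false index=* index=_*
| dedup index
| fields index
| join type=left index
    [| rest /services/data/indexes count=0
     | rename title AS index
     | fields index, frozenTimePeriodInSecs]
| eval retention_days=round(frozenTimePeriodInSecs/86400, 1)
| table index, retention_days
```

Indexes the role cannot see via REST will simply have a blank retention_days, which itself flags the permission gap.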
09-04-2018
10:28 PM
For the benefit of anyone else, here is the query I am using to determine total byte count (the result is total bytes in each column):
|inputlookup myLookupDefinition | foreach * [eval len_<<FIELD>>=len('<<FIELD>>')] | stats sum(len_*)
The benefit then is that the addition or subtraction of a single byte will trigger an alert... but I would prefer a hash of the entire table. 😞
09-04-2018
10:21 PM
I have several critical lookup files that I want to monitor to determine if they are altered in ANY capacity (lookup editor, outputlookup command, command line, etc.)
One idea I had was to call something like the MD5 function on the ENTIRE lookup file, but I can't seem to do that. My current method is to calculate the length of every field and sum them all up for a total byte count. It wouldn't detect a net-zero change in total bytes, but absent a better solution, it may be my best hope.
Ideas?
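A hedged sketch of the whole-table hash idea, using the eval md5() function and the lookup file name mentioned earlier in the thread. Caveats: foreach field order is not guaranteed to be stable across schema changes, and list() caps at 100 values, so very large lookups would need a different concatenation strategy.

```spl
| inputlookup lookupFileToMonitor.csv
| foreach * [eval row=coalesce(row, "") . "|" . coalesce('<<FIELD>>', "")]
| stats list(row) AS rows
| nomv rows
| eval tableHash=md5(rows)
| table tableHash
```

Comparing tableHash against the previous run's value (e.g., stored via outputlookup) would then flag any change, including net-zero-byte edits.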