All Posts


See also: https://community.splunk.com/t5/Splunk-Search/How-to-use-the-concurrency-command-to-timechart-the-top-10/m-p/698332/highlight/true#M237145
I'm working with Dashboard Studio for the first time and I've got another question. In the input on the Dashboard, I set this $servers_entered$. I thought I had a solution for counting how many items are in $servers_entered$, but I found a case that failed. This is what $servers_entered$ looks like:

host_1, host_2, host_3, host_4, ..., host_n

What I need is a way of counting how many entries are in $servers_entered$. So far the commands I've tried have failed. What would work?

TIA, Joe
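One way that should work (a minimal sketch - it assumes $servers_entered$ expands to a comma-separated list exactly as shown above, evaluated in a standalone search data source for the single value):

| makeresults
| eval server_count = mvcount(split("$servers_entered$", ",")) ``` split on commas, then count the multivalue entries ```
| table server_count

Stray whitespace around the entries doesn't matter for the count, since mvcount only counts the values, not their contents.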
Thank you. I was going about that all backwards.
As I had a similar problem - to count the parallel/concurrent HTTP requests grouping by time and host (which means the active threads in each server), I provide my solution:

index=jira-prod source="/opt/jira/logs/access_log*"
| rex field=_raw "^(?<IP>\d+\.\d+\.\d+\.\d+) (?<REQUEST_ID>[0-9]+x[0-9]+x[0-9]+) (?<USER>\S+) \[.+\] \"(?<REQUEST>[A-Z]+ \S+)-? HTTP/1.1\" (?<STATUS>[0-9]+) (?<BYTES>[0-9]+) (?<TIME>[0-9]+) \"(?<REFERER>[^\"]+)\".*$"
| eval DURATION=TIME/1000
| eval START_AT=floor(_time-DURATION)
| eval END_AT=floor(_time)
| eval IN_MOMENT=mvrange(START_AT,END_AT,1)
| mvexpand IN_MOMENT
| eval _time=strptime(""+IN_MOMENT,"%s")
| chart count as COUNT, max(DURATION) as MAX_DURATION by _time, host

This is parsing a real log file of Atlassian JIRA where:
- line 2 parses the JIRA access log and extracts its elements, including the duration of the request in milliseconds. Note that the request is logged at the moment it completes, so _time is the end time
- lines 3-5 calculate the duration in seconds, the start second and the end second
- line 6 fills IN_MOMENT with each of the seconds the request is active, having at least one value when the start second equals the end second
- line 7 duplicates the event for each of the seconds listed in IN_MOMENT, setting the event's IN_MOMENT field to the current second as a regular single value
- line 8 is more of a hack - it converts IN_MOMENT from an epoch number into a timestamp
- line 9 calculates whatever statistics/chart/timechart is needed, grouping by _time and host

This worked fine for me.
I'm having a similar issue. Any fix yet?
Hi @jm_tesla, may I know if you have further questions? If not, could you please mark this post as resolved (so that it will move from unanswered to answered and I will get a solution authored as well, thanks).
Best Regards,
Sekar
I believe I simply needed to restart each instance after I deleted the users on it.
Have you tried removing '/bin' from JAVA_HOME and the config file?
Splunk does not delete individual events - it removes entire buckets when either the size or time limit is reached. When deleting by time, because the whole bucket is deleted, it's important that all of the events in that bucket be old enough to delete. If any event is too new, the bucket will not be touched. Every bucket has two dates (for our purposes, anyway) associated with it - the start date (_time of the first event added) and the end date (_time of the last event added). The end date is the one that determines when the bucket can be deleted/frozen.

I've seen sites where data is poorly onboarded and has _time values in the future - sometimes by years. When that happens, the bucket will remain in the system until frozenTimePeriodInSecs after that future date passes.
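If you suspect future-dated events are keeping buckets alive, a quick way to look for them (a minimal sketch - your_index is a placeholder, and latest must extend past now or future events won't be returned):

index=your_index earliest=1 latest=+10y
| where _time > now() ``` only events with timestamps in the future ```
| stats count max(_time) as latest_event by host sourcetype
| convert ctime(latest_event)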
Well, even if you use index time as _time, you can still extract and use the event's time as a field. You can also use _indextime directly, or even extract the event time as an indexed field to use it fast. There are several possibilities. It's just that by default Splunk works in a specific way. And I still think (and it's actually not connected to Splunk itself) that lack of proper time synchronization is an important issue for any monitoring, and for security monitoring even more so. True, some SIEMs do have several separate time fields for any event, but on the other hand they have very rigid parsing rules, and once you have your data indexed, it's over. So each approach has its pros and cons. Splunk's bucketing by _time has one huuuuuge advantage - it speeds up searches by excluding whole buckets from being searched.
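As an example of using _indextime directly, the indexing lag per host is easy to compute (a sketch - your_index is a placeholder):

index=your_index
| eval lag_seconds = _indextime - _time ``` large or negative values point at clock skew or delayed forwarding ```
| stats avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag by host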
You could try using a marker gauge rather than a filler gauge - otherwise, this looks like a defect and should be raised with your Splunk support team.
Thanks @ITWhisperer. That is the solution.
You can make the meta-data from the data source visible, then set up another search to use the meta-data tokens, such as resultCount for the search for the single value.

"ds_tOBtSQ7e": {
    "type": "ds.search",
    "options": {
        "query": "index=_internal\n| stats count by sourcetype",
        "enableSmartSources": true,
        "queryParameters": {
            "earliest": "-24h@h",
            "latest": "now"
        }
    },
    "name": "Search_1"
},
"ds_aRrJ4C9T": {
    "type": "ds.search",
    "options": {
        "query": "| makeresults\n| fields - _time\n| eval count=$Search_1:job.resultCount$",
        "queryParameters": {
            "earliest": "-24h@h",
            "latest": "now"
        }
    },
    "name": "Search_3"
}
host="my.local" source="file_source.csv" sourcetype="csv"
| rex field=Source_Directory "\\\\([^\\\\]+\\\\){3}(?<src_folder>[^\\\\]+)"
| rex field=Destination_Directory "\\\\([^\\\\]+\\\\){3}(?<dest_folder>[^\\\\]+)"
| eval status = if(src_folder = dest_folder, "Same", "Different")
| table status, Source_Directory, Destination_Directory
| rex max_match=0 "(?m)^\t\t+(?<Group_name>.+)$"
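And if you also need the group count per event, mvcount on the multivalued capture should do it (a small sketch building on the rex above):

| rex max_match=0 "(?m)^\t\t+(?<Group_name>.+)$"
| eval group_count = mvcount(Group_name) ``` e.g. 9 for the sample event in the question ```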
Can you provide feedback on Rich's suggestion: Use the dbinspect command to examine your buckets.  Make sure the oldest ones don't have an earliest_time that is newer than the frozenTimePeriodInSecs setting.  Buckets will not age out until *all* of the events in the bucket are old enough.
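For reference, something like this shows each bucket's age (a minimal sketch - your_index is a placeholder; endEpoch is the _time of the newest event in the bucket, which is what the freeze decision is based on):

| dbinspect index=your_index
| eval age_days = round((now() - endEpoch) / 86400, 1)
| table bucketId state startEpoch endEpoch age_days
| convert ctime(startEpoch) ctime(endEpoch)
| sort - age_days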
Hello, working on monitoring whether someone has moved a file outside a specific folder inside a preset folder structure on a network, using data from a CSV source. Inside the CSV, I am evaluating two specific fields:

Source_Directory and Destination_Directory

I am trying to compare the two, going 3 folders deep in the file path, but running into an issue when performing my rex command. The preset folder structure is "\\my.local\d\p\", pulled from the data set used. Within the folder "\p\", there are various folder names. I need to evaluate whether a folder path is different beyond the preset path of "\\my.local\d\p\...". I put in bold what a discrepancy would look like, if there is one. Example data in the CSV:

Source_Directory                          Destination_Directory
\\my.local\d\p\prg1\folder1\bfolder       \\my.local\d\p\prg1\folder1\ffolder
\\my.local\d\p\prg2\folder1               \\my.local\d\p\prg2\folder2
\\my.local\d\p\prg1\folder2               \\my.local\d\p\prg2\folder1\xfolder\mfolder\
\\my.local\d\p\prg3\folder2\afolder       \\my.local\d\p\prg3\folder2
\\my.local\d\p\prg2\folder1               \\my.local\d\p\prg1\folder3

Output I am trying to create:

Status      Source_Directory                          Destination_Directory
Same        \\my.local\d\p\prg1\folder1\bfolder       \\my.local\d\p\prg1\folder1\ffolder
Same        \\my.local\d\p\prg2\folder1               \\my.local\d\p\prg2\folder2
Different   \\my.local\d\p\prg1\folder2               \\my.local\d\p\prg2\folder1\xfolder\mfolder\
Same        \\my.local\d\p\prg3\folder2\afolder       \\my.local\d\p\prg3\folder2
Different   \\my.local\d\p\prg2\folder1               \\my.local\d\p\prg1\folder3

If the folder name is different after the preset "\\my.local\d\p\" path, I need that to show in the "Status" output. I have searched extensively on how to use the rex command in this instance with no luck, so thought I would post my issue. Here is the search I have been trying to use:

host="my.local" source="file_source.csv" sourcetype="csv"
| eval src_dir = Source_Directory
| eval des_dir = Destination_Directory
| rex src_path = src_dir "(?<path>.*)\\\\\w*\.\w+$"
| rex des_path= des_dir "(?<path>.*)\\\\\w*\.\w+$"
| eval status = if (src_path = des_path, "Same", "Diffrent")
| table status, Source_Directory, Destination_Directory

Any assistance would be much appreciated.
Need some help in extracting Group Membership details from Windows Event Code 4627. As explained in this answer, https://community.splunk.com/t5/Splunk-Search/Regex-not-working-as-expected/m-p/470417 the following seems to work to extract Group_name, but the capture doesn't stop once the group list ends. Instead, it continues to match everything to the end of the line. I experimented with (?ms) and (?m) but didn't have any success.

"(?ms)(?:^Group Membership:\t\t\t|\G(?!^))\r?\n[\t ]*(?:[^\\\r\n]*\\\)*(?<Group_name>(.+))"

09/04/2024 11:59:59 PM
LogName=Security
EventCode=4627
EventType=0
ComputerName=DCServer.domain.x.y
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=64222222324
Keywords=Audit Success
TaskCategory=Group Membership
OpCode=Info
Message=Group membership information.

Subject:
		Security ID:		NT AUTHORITY\SYSTEM
		Account Name:		DCServer$
		Account Domain:		Domain
		Logon ID:		0x1111

Logon Type:	3

New Logon:
		Security ID:		Domain\Account
		Account Name:		Account
		Account Domain:		Domain
		Logon ID:		0x5023236

Event in sequence:	1 of 1

Group Membership:
		Domain\Group1
		Group2
		BUILTIN\Group3
		BUILTIN\Group4
		BUILTIN\Group5
		BUILTIN\Group6
		NT AUTHORITY\NETWORK
		NT AUTHORITY\Authenticated Users
		Domain\Group7

The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.

The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).

The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.

This event is generated when the Audit Group Membership subcategory is configured. The Logon ID field can be used to correlate this event with the corresponding user logon event as well as to any other security audit events generated during this logon session.

When I use this regex, it does capture starting from the Group list but continues on to the end of the event. How can I tell the regex to stop matching once the group list ends? Also, this regex seems to put all the groups into a single match. Is it possible to make it multi-valued, so that we can count the total number of groups present in a given event, e.g. 9 groups in the event example above?

Thanks,
~Abhi
I'm working with Dashboard Studio for the first time and I've got a question. Originally I created a table search that returns data depending on what is in the $servers_entered$ field. That works. I have been asked to add two single value fields. The first shows the number of servers in the $servers_entered$ field, and that works. The second shows the number of servers in the table search. There should be a way of linking that information, but I can't figure out how. I could run the search again, but that is rather inefficient.

How do you tie the search result count from a table search to a single value field?

TIA, Joe
I'm missing something and it's probably blatantly obvious... I have a search returning a number, but I want a filler gauge to show the value as it approaches a maximum value. In this example, I'd like the gauge to cap at 10,000, but it always shows 100.