All Posts

You can make the metadata from the data source visible, then set up another search that uses the metadata tokens, such as resultCount, for the single value.

"ds_tOBtSQ7e": {
    "type": "ds.search",
    "options": {
        "query": "index=_internal\n| stats count by sourcetype",
        "enableSmartSources": true,
        "queryParameters": {
            "earliest": "-24h@h",
            "latest": "now"
        }
    },
    "name": "Search_1"
},
"ds_aRrJ4C9T": {
    "type": "ds.search",
    "options": {
        "query": "| makeresults\n| fields - _time\n| eval count=$Search_1:job.resultCount$",
        "queryParameters": {
            "earliest": "-24h@h",
            "latest": "now"
        }
    },
    "name": "Search_3"
}
host="my.local" source="file_source.csv" sourcetype="csv" | rex field=Source_Directory "\\\\([^\\\\]+\\\\){3}(?<src_folder>[^\\\\]+)" | rex field=Destination_Directory "\\\\([^\\\\]+\\\\){3}(?<dest... See more...
host="my.local" source="file_source.csv" sourcetype="csv" | rex field=Source_Directory "\\\\([^\\\\]+\\\\){3}(?<src_folder>[^\\\\]+)" | rex field=Destination_Directory "\\\\([^\\\\]+\\\\){3}(?<dest_folder>[^\\\\]+)" | eval status = if(src_folder = dest_folder, "Same", "Different") | table status, Source_Directory, Destination_Directory
| rex max_match=0 "(?m)^\t\t+(?<Group_name>.+)$"
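If you also need the total number of groups per event, mvcount over that multivalue field should give it; a minimal sketch (the Group_count field name is just illustrative):

| rex max_match=0 "(?m)^\t\t+(?<Group_name>.+)$"
| eval Group_count=mvcount(Group_name)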
Can you provide feedback on Rich's suggestion: Use the dbinspect command to examine your buckets.  Make sure the oldest ones don't have an earliest_time that is newer than the frozenTimePeriodInSecs setting.  Buckets will not age out until *all* of the events in the bucket are old enough.
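For reference, a sketch of the kind of dbinspect check being described (the index name is a placeholder; age is computed from the newest event in each bucket, since the whole bucket has to be older than frozenTimePeriodInSecs before it can roll to frozen):

| dbinspect index=your_index
| eval newest_event_age_days = round((now() - endEpoch) / 86400, 1)
| sort 0 endEpoch
| table bucketId state startEpoch endEpoch newest_event_age_days sizeOnDiskMB

With frozenTimePeriodInSecs = 160833600 (roughly 1,861 days), any bucket still present whose newest_event_age_days exceeds that would be worth investigating.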
Hello, I am working on monitoring whether someone has moved a file outside a specific folder inside a preset folder structure on a network, using data from a CSV source. Inside the CSV, the two specific fields I am evaluating are Source_Directory and Destination_Directory. I am trying to compare the two going 3 folders deep in the file path, but I am running into an issue with my rex command. The preset folder structure pulled from the data set is "\\my.local\d\p\". Within the folder "\p\" there are various folder names. I need to evaluate whether a folder path is different beyond the preset path of "\\my.local\d\p\...". I put in bold what a discrepancy would look like if there is one.

Example data in CSV:

Source_Directory                         Destination_Directory
\\my.local\d\p\prg1\folder1\bfolder      \\my.local\d\p\prg1\folder1\ffolder
\\my.local\d\p\prg2\folder1              \\my.local\d\p\prg2\folder2
\\my.local\d\p\prg1\folder2              \\my.local\d\p\prg2\folder1\xfolder\mfolder\
\\my.local\d\p\prg3\folder2\afolder      \\my.local\d\p\prg3\folder2
\\my.local\d\p\prg2\folder1              \\my.local\d\p\prg1\folder3

Output I am trying to create:

Status      Source_Directory                         Destination_Directory
Same        \\my.local\d\p\prg1\folder1\bfolder      \\my.local\d\p\prg1\folder1\ffolder
Same        \\my.local\d\p\prg2\folder1              \\my.local\d\p\prg2\folder2
Different   \\my.local\d\p\prg1\folder2              \\my.local\d\p\prg2\folder1\xfolder\mfolder\
Same        \\my.local\d\p\prg3\folder2\afolder      \\my.local\d\p\prg3\folder2
Different   \\my.local\d\p\prg2\folder1              \\my.local\d\p\prg1\folder3

If a folder name is different after the preset "\\my.local\d\p\" path, I need that to show in the "Status" output. I have searched extensively on how to use the rex command in this situation with no luck, so I thought I would post my issue. Here is the search I have been trying to use:

host="my.local" source="file_source.csv" sourcetype="csv"
| eval src_dir = Source_Directory
| eval des_dir = Destination_Directory
| rex src_path = src_dir "(?<path>.*)\\\\\w*\.\w+$"
| rex des_path= des_dir "(?<path>.*)\\\\\w*\.\w+$"
| eval status = if (src_path = des_path, "Same", "Diffrent")
| table status, Source_Directory, Destination_Directory

Any assistance would be much appreciated.
Need some help extracting Group Membership details from Windows Event Code 4627. As explained in this answer, https://community.splunk.com/t5/Splunk-Search/Regex-not-working-as-expected/m-p/470417, the following seems to work to extract Group_name, but the capture doesn't stop once the group list ends. Instead, it continues to match everything until the end of the event. I experimented with (?ms) and (?m) but didn't have any success.

"(?ms)(?:^Group Membership:\t\t\t|\G(?!^))\r?\n[\t ]*(?:[^\\\r\n]*\\\)*(?<Group_name>(.+))"

Sample event:

09/04/2024 11:59:59 PM
LogName=Security
EventCode=4627
EventType=0
ComputerName=DCServer.domain.x.y
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=64222222324
Keywords=Audit Success
TaskCategory=Group Membership
OpCode=Info
Message=Group membership information.

Subject:
    Security ID:      NT AUTHORITY\SYSTEM
    Account Name:     DCServer$
    Account Domain:   Domain
    Logon ID:         0x1111

Logon Type:           3

New Logon:
    Security ID:      Domain\Account
    Account Name:     Account
    Account Domain:   Domain
    Logon ID:         0x5023236

Event in sequence:    1 of 1

Group Membership:
    Domain\Group1
    Group2
    BUILTIN\Group3
    BUILTIN\Group4
    BUILTIN\Group5
    BUILTIN\Group6
    NT AUTHORITY\NETWORK
    NT AUTHORITY\Authenticated Users
    Domain\Group7

The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.

The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).

The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.

This event is generated when the Audit Group Membership subcategory is configured. The Logon ID field can be used to correlate this event with the corresponding user logon event as well as to any other security audit events generated during this logon session.

When I use this regex, it does capture starting from the group list, but it continues on until the end of the event. How can I tell the regex to stop matching once the group list ends? Also, this regex seems to put all the groups into a single match. Is it possible to make it multi-valued, so that we can count the total number of groups present in a given event, e.g. 9 groups in the example event above?

Thanks,
~Abhi
I'm working with Dashboard Studio for the first time and I've got a question. Originally I created a table search that returns data depending on what is in the $servers_entered$ field.  That works.  I have been asked to add two single value fields.  The first is showing the number of servers in the $servers_entered$ field and that works.  The second is showing the number of servers in the table search.  There should be a way of linking that information, but I can't figure out how.  I could run the search again, but that is rather inefficient. How do you tie the search result count from a table search to a single value field? TIA, Joe
I'm missing something and it's probably blatantly obvious... I have a search returning a number, but I want a filler gauge to show the value as it approaches a maximum value. In this example, I'd like the gauge to cap at 10,000, but it always shows 100.
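One thing to check: the default gauge range is 0-100, which would explain why it tops out at 100. A minimal sketch using the SPL gauge command to set the range (the field name count is a placeholder for whatever your search returns):

... | gauge count 0 2500 5000 10000

The numbers after the field name define the range boundaries, so the gauge scales to 10,000; range values can also be set in the visualization's formatting options rather than in the search.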
Rich, thanks for the clarification. The Splunk documentation is kinda confusing on this specific topic. That is helpful, frustrating, and leaves me with even more questions. Now I have absolutely no idea why we have logs 3 years older than the retention is set to. There is nothing set up to freeze anything, so it should all be rolling out as it hits that 5.1-year mark. Would homePath.maxDataSizeMB override it? I thought that was the cut-off to roll warm to cold and shouldn't affect this. The only limits set are:

maxTotalDataSizeMB = 1000000000  # 1,000 TB
homePath.maxDataSizeMB = 500000  # 500 GB
frozenTimePeriodInSecs = 160833600  # 5.1 years
maxDataSize = 2000
maxWarmDBCount = 2000
Are there any working screenshots or a demo available for this app? There seems to be no video tutorial or guidance docs besides the main doc. Any guidance would be helpful. I am looking for a way to get JIRA -> Splunk data in whenever there is a change to an issue, or just to be able to query all the issues in JIRA via Splunk and pull back stats.
Try something like this:

| metadata type=sources where index=gwcc
| eval file=lower(mvindex(split(source,"/"),-1))
| dedup file
| table source, file
| sort file
When you've got ES control, but have to file a ticket that will take months to respond to from a Splunk core admin team for data issues, sometimes you just do what you gotta do.
@richgalloway I checked the dbx_settings.conf file and I see the right Java path in the file, but I am still seeing the same error. I have tried reinstalling the DB Connect app with no luck. I have also tried searching for the string "/bin/bin" under the Splunk db_connect app path, but the string does not show up in any file.
@N_K You can make an action block loop through a list of parameters with the right input from a format block. With the HTTP app it may be harder to do, as there are a lot of potential parameters. And yes, please don't try to use requests outside of an app space.

Depending on what you are using the HTTP app for, it may be best to build an app to handle it, since you get a lot more control over the behaviour; the HTTP app, IMO, is usually only useful for testing interactions with external APIs or for simple HTTP-related tasks. How many parameters are dynamic when you use the HTTP app?
Below is a quite simple query that fills a drop-down list in my dashboard:

index=gwcc
| eval file=lower(mvindex(split(source,"/"),-1))
| dedup file
| table source, file
| sort file

The point is that it takes 30-60 seconds to run. Do you have an idea how to simplify it, or write it in a more efficient way?
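A sketch of one possible alternative, assuming source is the only field you need: tstats works from index-time metadata rather than scanning raw events, so something like this may come back much faster:

| tstats count WHERE index=gwcc BY source
| eval file=lower(mvindex(split(source,"/"),-1))
| dedup file
| table source, file
| sort file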
@phanTom Thanks for the reply. Unfortunately the input playbook contains an HTTP app block. I've tried to just make the request in a code block using requests, but I am running into proxy errors; it works fine when I use the app.
I'm looking into upgrading Splunk Enterprise from 9.0.4 to 9.3.0. Following the upgrade docs, there's a step to back up the KV store:

Check the KV store status
To check the status of the KV store, use the show kvstore-status command:
./splunk show kvstore-status

When I run this command, it asks me for a Splunk username and password. This environment was handed over by a project team, but nothing was handed over about what the Splunk password might be, or whether we actually use a KV store. I've tried the admin password, but that hasn't worked.

I've found some Splunk documents advising that the KV store config would be in $SPLUNK_HOME/etc/system/local/server.conf, under [kvstore]. There is nothing in our server.conf under [kvstore].

I've also found some notes saying the KV store will not start if a $SPLUNK_HOME\var\lib\splunk\kvstore\mongo\mongod.lock file is present. We have 2 Splunk servers - one has a lock file dated Oct 2022, and the other dated July 19th. Based on this, I suspect it's not used, otherwise we'd have hit issues with it before? That's just a guess, but this is my first foray into Splunk, so I thought I'd ask: based on the above scenarios, do I need to back up the KV store, and are there any other checks to confirm definitively whether we have a KV store that's in use?

Thanks in advance
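One additional check, sketched as a search rather than the CLI (the kvStoreStatus field name is what the server/info REST endpoint is generally expected to report; verify it in your environment):

| rest splunk_server=local /services/server/info
| table splunk_server kvStoreStatus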
"_time is _the_ most important field " is precisely why we don't want to use the DATETIME_CONFIG=current solution. We are still using the _time, would be nice to use it together with _indextime. We ... See more...
"_time is _the_ most important field " is precisely why we don't want to use the DATETIME_CONFIG=current solution. We are still using the _time, would be nice to use it together with _indextime. We are operating at a scale too large to be fixing clocks.  When a misconfiguration is intended, we first have to catch it. We have to "account for lagging sources with our searches", which means very large time windows. Plus missing data in case of outages, so have to replay those searches to cover the outage timeframes. In any case, we are used to Splunk products being restrictive and making a lot of assumptions on how the customers should use it. We are working around exactly as you described it, just would be nice to have more options.
Only certain sourcetypes supported by the TA map to CIM datamodels.  The list is at https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Sourcetypes If you don't see what you need then you may need to add local aliases, etc. to the TA.
A few questions:
- Do you have a TA for the logs you are ingesting, and is it set up on all the needed Splunk components (check your docs)?
- Looking at the _internal logs, do you see that Splunk has ingested them? (A sketch of this check follows below.)
- Can you search for a string that exists in your logs across all your indexes, over the time range in which you verified the data was ingested, and find any responsive events?

Also, for syslog data in general it is simpler and more durable to forward the data to a syslog server, have a UF monitor the relevant files, and then set up monitoring stanzas per host/data source:

[monitor://var/log...whatever]
whitelist = regex
blacklist = regex
host_segment = as needed
crcSalt = <SOURCE> {as needed}
sourcetype = syslog {or whatever you want}
index = yourIndex

Consult also: How the Splunk platform handles syslog data over the UDP network protocol - Splunk Documentation
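The _internal check referenced in the second question above, as a sketch (the sourcetype name is a placeholder for whichever one you assigned to the input):

index=_internal source=*metrics.log group=per_sourcetype_thruput series=your_sourcetype
| timechart span=5m sum(kb) AS kb_indexed

If that returns nothing, the data likely never reached the indexers; if it returns data, the problem is more likely on the search side (index permissions, time range, or sourcetype naming).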