All Posts


Hi, in the end I used this, though it was not clear to me why I did not need to reference the newly created X_mr fields. I could go straight to mr:

source="trace_Marketing_Bench_31032016_17_cff762901d1eff01766119738a9218e2*.jsonl" host="TEST2" index="murex_logs" sourcetype="Market_Risk_DT" "**strategy**" 920e1021406277a9
| spath resourceSpans{}.scopeSpans{}.spans{}.attributes{} output=attributes
| mvexpand attributes
| spath input=attributes
| eval X_{key}=coalesce('value.doubleValue', 'value.stringValue')
| stats values(X_*) as * by _time
| stats sum(mr_batch_load_cpu_time) as batch_load_cpu_time sum(mr_batch_load_time) as batch_load_time sum(mr_batch_compute_time) as mr_batch_compute_time sum(mr_batch_compute_cpu_time) as mr_batch_compute_cpu_time by mr_strategy

This produced the table I was looking for. What I don't understand is that at this point I can only see the new X_mr fields, yet after "| stats values(X_*) as * by _time" we are back to the original field names. I don't get that.
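For anyone else who hits this: the prefix disappears because of the wildcard rename in stats. In values(X_*) as *, the * on the right is replaced by whatever the * on the left matched, so X_mr_strategy comes back out as mr_strategy. A minimal sketch (field names here are made up purely for illustration):

| makeresults
| eval X_foo="a", X_bar="b"
| stats values(X_*) as * by _time

After the stats, the fields are simply foo and bar, which is why the later stats can reference mr_strategy and friends directly without the X_ prefix.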
This is a feature that is required. I tried several steps, but nothing worked. Was anyone able to achieve this?
The StatusMsg field is being created on the fly, but it has to come from *somewhere*.  The OP has a list of possible messages, but there is no indication of when each is used. <<some expression>> refers to a Boolean check that decides when to set StatusMsg to a specific string.  The expression probably will need to test the values of other fields (perhaps Host and/or ConnName).  You know your data better than I do so I can't be more detailed than that.
Search for all your users, then extract the CN using rex. If you are trying to tighten your search criteria, here is the spec for searches: https://datatracker.ietf.org/doc/html/rfc2254
StatusMsg is the field (created on the fly) that I want populated by the message, so I'm not certain what you mean by <<some expression>>. That was why I thought maybe this would be an if-then type of query: if StatusMsg="some value", then put that in the table along with the other data; if not, then go to the next status message. So I would want:

Action                                Host          ConnName
"Task through an uncaught..."         lx.......     CCNBU----

So should this be an if-then search?
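To make <<some expression>> concrete: it stands for any Boolean test, so one hedged sketch of the if-then behavior, with the tests and message strings invented purely for illustration, would be:

| eval StatusMsg=case(
    match(_raw, "uncaught"), "Task through an uncaught...",
    match(_raw, "timed out"), "Connection timed out",
    true(), "Unknown status")
| table StatusMsg Host ConnName

case() evaluates each test in order and returns the first matching message, which is the "if this value, then that message, otherwise try the next one" logic described above.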
If this helps, this is how we currently list the members of a specific AD group:

ldapsearch search="(&(objectClass=user)(memberOf=CN=Schema Admins,OU=groups,DC=domain,DC=xxx))"
Setup: the LM, indexer, SH, and DS are currently all on the same host, running Splunk Enterprise version 9.4. I get about 10 messages a second logged in splunkd.log with the following error:

ERROR BTree [1001653 IndexerTPoolWorker-3] - 0th child has invalid offset: indexsize=67942584 recordsize=166182200, (Internal)
ERROR BTreeCP [1001653 IndexerTPoolWorker-3] - addUpdate CheckValidException caught: BTree::Exception: Validation failed in checkpoint

I have noticed that btree_index.dat and btree_records.dat in /opt/splunk/data/fishbucket/splunk_private_db are re-created every few seconds. From what I can tell, after they get to a certain point, those files are copied into the corrupt directory and deleted. It then starts all over. I have tried shutting down Splunk and copying snapshot files over, but when I restart Splunk they are overwritten and the whole loop of files being created and then copied to corrupt starts again.

I tried a repair on the data files with the following command:

splunk cmd btprobe -d /opt/splunk/data/fishbucket/splunk_private_db -r

which returned the following output:

no root in /opt/splunk/data/fishbucket/splunk_private_db/btree_index.dat with non-empty recordFile /opt/splunk/data/fishbucket/splunk_private_db/btree_records.dat
recovered key: 0xd3e9c1eb89bdbf3e | sptr=1207
Exception thrown: BTree::Exception: called debug on btree that isn't open!

It is entirely possible there is some corruption somewhere. We did have a filesystem issue a while back; I had to run fsck and there were a few files that I removed. As far as the data goes, I can't seem to find where the problem might be. In Splunk search I appear to have incomplete data in the _internal index, and the Licensing and Data Quality views are empty with no data.

Do I have some corrupt data somewhere which is causing problems with my btree index data? How would I go about finding the cause of this problem?
You are almost there - assuming your field is _raw:

| rex "(?<groups>(?<=CN=)[^,]+)"
Hi everyone! After upgrading to version 3.8.1, I got a bunch of errors. In Security Content I get the following:

app/Splunk_Security_Essentials/security_content 404 (Not Found)
Uncaught (in promise) Error: Access to storage is not allowed from this context.

On the Data Inventory page I get the following:

Error! Received the following error: Error occurred while grabbing data_inventory_products

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_Security_Essentials/bin/generateShowcaseInfo.py", line 473, in handle
    if row['stage'] in ['all-done', 'step-review', 'step-eventsize', 'step-volume', 'manualnodata']:
KeyError: 'stage'

There are also a couple of apps, splunk_essentials_8_2 and splunk_essentials_9_0, that are both enabled. Does anyone know how to fix this? Thanks!
Example table output would be something like:

User1  Schema Admins
User2  Schema Admins
User1  Enterprise Admins
User3  Domain Admins
Qualys logs are not flowing to Splunk. This is the error:

TA-QualysCloudPlatform: 2025-02-13 00:00:49 PID=3604316 [MainThread] ERROR: No credentials found. Cannot continue.

How can I debug this? Note: we recently updated the expired credentials of the Splunk user account in Qualys. The new credentials work fine with the GUI (https://qualysguard.qg2.apps.qualys.com/fo/login.php), but the Splunk add-on is still not accepting them, and logs are not flowing.
The rex "grabs" the CN - is this what you want to search for? Please can you give examples of what you are trying to "grab" from these strings?
Update: Thanks for all the help. I was able to get an assist from a colleague and wanted to provide his search in case it works for anyone else.

In a nutshell, in the line rows=5 you can change 5 to whatever number you need. In the output, if there are more than 5 hosts it will show "..." at the bottom so you know there are more; the columns on the right show the initial host count and the count after truncation.

| tstats values(host) as host where index=* by index
| foreach host
    ``` This code will only show you the first n values in the values() command```
    [ eval rows=5
    | eval mvCountInitial=mvcount('<<FIELD>>')
    | eval <<FIELD>>=if(mvcount('<<FIELD>>')>$rows$, mvappend(mvindex('<<FIELD>>',0,$rows$-1),"..."), '<<FIELD>>')
    | eval mvCountTruncated=if(mvcount('<<FIELD>>')>$rows$,mvcount('<<FIELD>>')-1,mvcount('<<FIELD>>'))
    | fields - rows]
| rename mvCountInitial as "Total Host Count" mvCountTruncated as Truncated
Hello livehybrid, thank you. This gives me exactly what I am looking for.
Trying to build a search that will leverage ldapsearch to pull a current list of users that are members of a specific list of groups. For example, some groups may be:

CN=Schema Admins,OU=groups,DC=domain,DC=xxx
CN=Enterprise Admins,OU=group1,OU=groups,DC=domain,DC=xxx
CN=Domain Admins Admins,OU=group1,OU=groups,DC=domain,DC=xxx

This rex, (?<=CN=)[^,]+, will grab the group name, but I'm having trouble pulling it all together. The search needs to cover any group we want to include by specific name, then table out a list of the users that are members of each group, sorted by group name.
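Pulling the thread together, a hedged end-to-end sketch (this assumes the ldapsearch command from the SA-ldapsearch app, that attrs is used to return sAMAccountName and memberOf, and that the group DNs below match your directory; adjust as needed):

| ldapsearch search="(&(objectClass=user)(|(memberOf=CN=Schema Admins,OU=groups,DC=domain,DC=xxx)(memberOf=CN=Enterprise Admins,OU=group1,OU=groups,DC=domain,DC=xxx)))" attrs="sAMAccountName,memberOf"
| mvexpand memberOf
| rex field=memberOf "(?<group>(?<=CN=)[^,]+)"
| search group="Schema Admins" OR group="Enterprise Admins"
| table sAMAccountName, group
| sort group

The mvexpand is there because memberOf is multivalued; expanding it first gives one row per user/group pair, which matches the desired table output.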
30 times faster! I like it. That is great news. Thanks for letting me know! 
ES cannot be installed in a cluster on Windows; it only supports standalone ES search heads. https://docs.splunk.com/Documentation/ES/latest/RN/Limitations
Thanks for that additional parameter. The original query took 37 minutes; your suggestion brought it to 1 minute. Amazing, thanks very much!
If you need to match the LKUP_DSN field in the subsearch with the DS_NAME field in the main search, then LKUP_DSN must be renamed to DS_NAME.

source="*ACF2DS_Data.csv" index="idxmainframe" earliest=0 latest=now
    [search source="*DSN_LKUP.csv" index="idxmainframe" earliest=0 latest=now
    | rename LKUP_DSN as DS_NAME
    | fields DS_NAME
    | format ]
| table TIMESTAMP, DS_NAME, JOBNAME
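For anyone wondering what the subsearch hands back: format rewrites the subsearch's result rows as a search expression, so with the rename in place the outer search effectively receives something like (values invented for illustration):

( ( DS_NAME="PROD.DATASET.A" ) OR ( DS_NAME="PROD.DATASET.B" ) )

which is why the field names must line up before format runs.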
I recently had cause to ingest Oracle Unified Directory logs in ODL format. I'm performing pretty simple file-based ingestion and didn't want to go down the path of DB Connect, etc., so it made sense to write my own parsing logic. I didn't have a lot of luck finding things precisely relevant to my logs and I could have benefited from the concrete example I'll lay out below, so here it is; hopefully it will help others. This specific example is OUD logs, but it's relevant to any arbitrary _KEY_n, _VAL_n extraction in Splunk.

Log format

The ODL logs are mostly well structured, but the format is a little odd to work with:

- Most fields are bracketed: [FIELD_1]
- The log starts with "header" fields which are fixed and contain single values
- The next section of the log has an arbitrary number of bracketed key-value pairs: [KEY: VALUE]
- The final piece is a non-structured string that may also contain brackets and colons

Below is a small sample of a few different types of logs:

[2016-12-30T11:08:46.216-05:00] [ESSBASE0] [NOTIFICATION:16] [TCP-59] [TCP] [ecid: 1482887126970,0] [tid: 140198389143872] Connected from [::ffff:999.999.99.999]
[2016-12-30T11:08:27.60-05:00] [ESSBASE0] [NOTIFICATION:16] [AGENT-1001] [AGENT] [ecid: 1482887126970,0] [tid: 140198073563456] Received client request: Clear Application/Database (from user [sampleuser@Native Directory])
[2016-12-30T11:08:24.302-05:00] [PLN3] [NOTIFICATION:16] [REQ-91] [REQ] [ecid: 148308120489,0] [tid: 140641102035264] [DBNAME: SAMPLE] Received Command [SetAlias] from user [sampleuser@Native Directory]
[2016-12-30T11:08:26.932-05:00] [PLN3] [NOTIFICATION:16] [SSE-82] [SSE] [ecid: 148308120489,0] [tid: 140641102035264] [DBNAME: SAMPLE] Spreadsheet Extractor Big Block Allocs -- Dyn.Calc.Cache : [202] non-Dyn.Calc.Cache : [0]
[2025-02-07T15:18:35.014-05:00] [OUD] [TRACE] [OUD-24641549] [PROTOCOL] [host: testds01.redacted.fake] [nwaddr: redacted] [tid: 200] [userId: ldap] [ecid: 0000PJY46q64Usw5sFt1iX1bdZJL0003QL,0:1] [category: RES] [conn: 1285] [op: 0] [msgID: 1] [result: 0] [authDN: uid=redacted,ou=redacted,o=redacted,c=redacted] [etime: 0] BIND
[2025-02-07T15:18:35.014-05:00] [OUD] [TRACE] [OUD-24641548] [PROTOCOL] [host: testds01.redacted.fake] [nwaddr: redacted] [tid: 200] [userId: ldap] [ecid: 0000PJY46q64Usw5sFt1iX1bdZJL0003QK,0:1] [category: REQ] [conn: 1285] [op: 0] [msgID: 1] [bindType: SIMPLE] [dn: uid=redacted,ou=redacted,o=redacted,c=redacted] BIND

Configuration files

We opted to use the sourcetype "odl:oud". This allows future extension into other ODL-formatted logs.

props.conf

Note the 3-part REPORT processing. This utilizes regexes in transforms.conf to process the headers, the key-value pairs, and the trailing message. This could then be extended to utilize portions of that extraction for other log types that fall under the ODL format.

[odl:oud]
REPORT-oudparse = extractOUDheader, extractODLkv, extractOUDmessage

transforms.conf

This was the piece I could have used some concrete examples of. These three report extractions allow a pretty flexible ingestion of all parts of the logs.
The key here was separating the _KEY_1 _VAL_1 extraction into its own process with the REPEAT_MATCH flag set.

# extract fixed leading values from Oracle Unified Directory log message
[extractOUDheader]
REGEX = ^\[(?<timestamp>\d{4}[^\]]+)\] \[(?<organization_id>[^\]]+)\] \[(?<message_type>[^\]]+)\] \[(?<message_id>[^\]]+)\] \[(?<component>[^\]]+?)\]

# extract N number of key-value pairs from Oracle Diagnostic Logging log body
[extractODLkv]
REGEX = \[(?<_KEY_1>[^:]+?): (?<_VAL_1>[^\]]+?)\]
REPEAT_MATCH = true

# extract trailing, arbitrary message text from Oracle Unified Directory log
[extractOUDmessage]
REGEX = \[[^:]+: [^\]]+\] (?<message>[^\[].*)$

The final regex there looks for a preceding key-value pair, NOT followed by a new square bracket, with any arbitrary characters thereafter to end the line. I initially tried to make one regex to perform all of this, which doesn't really work without being very prescriptive about the structure. With an unknown number of key-value pairs, that is not ideal, and this solution seemed much more Splunk-esque. I hope this helps someone else!
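Once those extractions are in place, every bracketed key becomes a first-class search-time field. A hedged usage sketch (the index name here is an assumption, not from the original post):

index=oud sourcetype=odl:oud category=RES
| stats avg(etime) as avg_etime, count by authDN

Since etime, category, and authDN come straight out of the [KEY: VALUE] pairs via the _KEY_1/_VAL_1 extraction, no further rex work is needed at search time.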