All Posts


Qualys logs are not flowing to Splunk. This is the error: TA-QualysCloudPlatform: 2025-02-13 00:00:49 PID=3604316 [MainThread] ERROR: No credentials found. Cannot continue. How do I debug this? Note: We recently updated the expired credentials of the Splunk user account in Qualys, and the new credentials work fine in the GUI (https://qualysguard.qg2.apps.qualys.com/fo/login.php). However, the Splunk add-on is still not accepting them, and logs are not flowing.
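(Not from the original post, but one hedged way to start debugging is to look at the add-on's own internal logging. This is just a rough sketch; the source filter is an assumption about how the TA names its log files:

index=_internal source=*qualys* (ERROR OR WARN)
| sort - _time
| table _time, host, source, _raw

It is also usually worth checking whether the new credentials were re-entered on the add-on's own setup/configuration page in Splunk, since "No credentials found" suggests the TA's stored configuration is empty rather than the Qualys account itself being wrong.)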
The rex "grabs" the CN - is this what you want to search for? Please can you give examples of what you are trying to "grab" from these strings?
Update: Thanks for all the help. I was able to get an assist from a colleague and wanted to provide his search in case it works for anyone else. In a nutshell, you can change the line rows=5 to whatever number you need; if there are more than 5 hosts, the output will show "..." at the bottom so you know there are more, and the columns on the right show you the initial number of hosts and the count after truncation.

| tstats values(host) as host where index=* by index
| foreach host ``` This code will only show you the first n values in the values() command ```
    [ eval rows=5
    | eval mvCountInitial=mvcount('<<FIELD>>')
    | eval <<FIELD>>=if(mvcount('<<FIELD>>')>$rows$, mvappend(mvindex('<<FIELD>>',0,$rows$-1),"..."), '<<FIELD>>')
    | eval mvCountTruncated=if(mvcount('<<FIELD>>')>$rows$,mvcount('<<FIELD>>')-1,mvcount('<<FIELD>>'))
    | fields - rows]
| rename mvCountInitial as "Total Host Count" mvCountTruncated as Truncated
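(For anyone who wants to see the truncation trick in isolation, here is a minimal, self-contained sketch; the host values are made up purely for illustration:

| makeresults
| eval host=split("h1,h2,h3,h4,h5,h6,h7", ",")
| eval mvCountInitial=mvcount(host)
``` keep only the first 5 values and append "..." when there are more ```
| eval host=if(mvcount(host) > 5, mvappend(mvindex(host, 0, 4), "..."), host)
| eval mvCountTruncated=if(mvCountInitial > 5, 5, mvCountInitial)

The same mvindex/mvappend pattern is what the foreach template above applies to every field it iterates over.)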
Hello livehybrid, thank you. This gives me exactly what I am looking for.
Trying to build a search that will leverage ldapsearch to pull a current list of users that are members of a specific list of groups. For example, some groups may be:
CN=Schema Admins,OU=groups,DC=domain,DC=xxx
CN=Enterprise Admins,OU=group1,OU=groups,DC=domain,DC=xxx
CN=Domain Admins Admins,OU=group1,OU=groups,DC=domain,DC=xxx
This rex (?<=CN=)[^,]+ will grab the group name, but I'm having trouble pulling it all together. This needs to search any group we want to include by specific name and then table out a list of the users that are members of each group, sorted by the group name.
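(Not a full answer, but to sketch the general shape: assuming the SA-ldapsearch app provides the ldapsearch command in your environment, and that the returned field names such as dn and member match yours, something along these lines may be a starting point:

| ldapsearch domain=default search="(&(objectClass=group)(|(cn=Schema Admins)(cn=Enterprise Admins)(cn=Domain Admins)))" attrs="cn,member"
| rex field=dn "(?i)CN=(?<group>[^,]+)"
| mvexpand member
| rex field=member "(?i)CN=(?<user>[^,]+)"
| table group, user
| sort group, user

The group names in the LDAP filter are placeholders; substitute the specific groups you want to include.)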
30 times faster! I like it. That is great news. Thanks for letting me know! 
ES cannot be installed in a search head cluster on Windows; it only supports standalone ES search heads. https://docs.splunk.com/Documentation/ES/latest/RN/Limitations
Thanks for that additional parameter. The original query took 37 minutes; your suggestion brought it down to 1 minute. Amazing, thanks very much!
If you need to match the LKUP_DSN field in the subsearch with the DS_NAME field in the main search, then LKUP_DSN must be renamed to DS_NAME.

source="*ACF2DS_Data.csv" index="idxmainframe" earliest=0 latest=now
    [search source="*DSN_LKUP.csv" index="idxmainframe" earliest=0 latest=now
    | rename LKUP_DSN as DS_NAME
    | fields DS_NAME
    | format ]
| table TIMESTAMP, DS_NAME, JOBNAME
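(For context on what format is doing there: the subsearch returns the renamed DS_NAME values, and format turns them into an OR'd filter that the outer search applies directly. With hypothetical dataset names, the outer search effectively becomes something like:

source="*ACF2DS_Data.csv" index="idxmainframe" earliest=0 latest=now
    ( ( DS_NAME="PROD.DATASET.ONE" ) OR ( DS_NAME="PROD.DATASET.TWO" ) )
| table TIMESTAMP, DS_NAME, JOBNAME

so only events whose DS_NAME exactly matches a lookup value are returned.)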
I recently had cause to ingest Oracle Unified Directory logs in ODL format. I'm performing pretty simple file-based ingestion and didn't want to go down the path of DB Connect, etc., so it made sense to write my own parsing logic. I didn't have a lot of luck finding things precisely relevant to my logs and I could have benefited from the concrete example I'll lay out below, so here it is; hopefully it will help others. This specific example is OUD logs, but it's relevant to any arbitrary _KEY_n, _VAL_n extraction in Splunk.

Log format

The ODL logs are mostly well structured, but the format is a little odd to work with. Most fields are bracketed: [FIELD_1]. The log starts with "header" fields which are fixed and contain single values. The next section of the log has an arbitrary number of bracketed key-value pairs: [KEY: VALUE]. The final piece is a non-structured string that may also contain brackets and colons.

Below is a small sample of a few different types of logs:

[2016-12-30T11:08:46.216-05:00] [ESSBASE0] [NOTIFICATION:16] [TCP-59] [TCP] [ecid: 1482887126970,0] [tid: 140198389143872] Connected from [::ffff:999.999.99.999]
[2016-12-30T11:08:27.60-05:00] [ESSBASE0] [NOTIFICATION:16] [AGENT-1001] [AGENT] [ecid: 1482887126970,0] [tid: 140198073563456] Received client request: Clear Application/Database (from user [sampleuser@Native Directory])
[2016-12-30T11:08:24.302-05:00] [PLN3] [NOTIFICATION:16] [REQ-91] [REQ] [ecid: 148308120489,0] [tid: 140641102035264] [DBNAME: SAMPLE] Received Command [SetAlias] from user [sampleuser@Native Directory]
[2016-12-30T11:08:26.932-05:00] [PLN3] [NOTIFICATION:16] [SSE-82] [SSE] [ecid: 148308120489,0] [tid: 140641102035264] [DBNAME: SAMPLE] Spreadsheet Extractor Big Block Allocs -- Dyn.Calc.Cache : [202] non-Dyn.Calc.Cache : [0]
[2025-02-07T15:18:35.014-05:00] [OUD] [TRACE] [OUD-24641549] [PROTOCOL] [host: testds01.redacted.fake] [nwaddr: redacted] [tid: 200] [userId: ldap] [ecid: 0000PJY46q64Usw5sFt1iX1bdZJL0003QL,0:1] [category: RES] [conn: 1285] [op: 0] [msgID: 1] [result: 0] [authDN: uid=redacted,ou=redacted,o=redacted,c=redacted] [etime: 0] BIND
[2025-02-07T15:18:35.014-05:00] [OUD] [TRACE] [OUD-24641548] [PROTOCOL] [host: testds01.redacted.fake] [nwaddr: redacted] [tid: 200] [userId: ldap] [ecid: 0000PJY46q64Usw5sFt1iX1bdZJL0003QK,0:1] [category: REQ] [conn: 1285] [op: 0] [msgID: 1] [bindType: SIMPLE] [dn: uid=redacted,ou=redacted,o=redacted,c=redacted] BIND

Configuration files

We opted to use the sourcetype "odl:oud". This allows future extension into other ODL-formatted logs.

props.conf

Note the 3-part REPORT processing. This utilizes regexes in transforms.conf to process the headers, the key-value pairs, and the trailing message. This could then be extended to utilize portions of that extraction for other log types that fall under the ODL format.

[odl:oud]
REPORT-oudparse = extractOUDheader, extractODLkv, extractOUDmessage

transforms.conf

This was the piece I could have used some concrete examples of. These three report extractions allow a pretty flexible ingestion of all parts of the logs.
The key here was separating the _KEY_1 _VAL_1 extraction into its own process with the REPEAT_MATCH flag set.

# extract fixed leading values from Oracle Unified Directory log message
[extractOUDheader]
REGEX = ^\[(?<timestamp>\d{4}[^\]]+)\] \[(?<organization_id>[^\]]+)\] \[(?<message_type>[^\]]+)\] \[(?<message_id>[^\]]+)\] \[(?<component>[^\]]+?)\]

# extract N number of key-value pairs from Oracle Diagnostic Logging log body
[extractODLkv]
REGEX = \[(?<_KEY_1>[^:]+?): (?<_VAL_1>[^\]]+?)\]
REPEAT_MATCH = true

# extract trailing, arbitrary message text from Oracle Unified Directory log
[extractOUDmessage]
REGEX = \[[^:]+: [^\]]+\] (?<message>[^\[].*)$

The final regex there looks for a preceding key-value pair, NOT followed by a new square bracket, with any arbitrary characters thereafter to end the line. I initially tried to make one regex to perform all of this, which doesn't really work without being very prescriptive about the structure. With an unknown number of key-value pairs, this is not ideal, and this solution seemed much more Splunk-esque. I hope this helps someone else!
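(A quick way to sanity-check the result once these search-time extractions are deployed is something like the search below; the index name is just a placeholder for wherever you send the OUD data, and the field list simply mirrors the header and key-value names seen in the samples above:

index=oud sourcetype="odl:oud"
| table _time, timestamp, organization_id, message_type, message_id, component, category, conn, op, msgID, message

Any key-value pair present in a given event should appear as its own field thanks to the REPEAT_MATCH extraction.)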
Hi Experts, The file ACF2DS_Data.csv contains columns including TIMESTAMP, DS_NAME, and JOBNAME. I need to match the DS_NAME column from this file with the LKUP_DSN column in DSN_LKUP.csv to obtain the corresponding events from ACF2DS_Data.csv. The query provided below is not working as expected. Could you please assist me in resolving the issue with the query?

source="*ACF2DS_Data.csv" index="idxmainframe" earliest=0 latest=now
    [search source="*DSN_LKUP.csv" index="idxmainframe" earliest=0 latest=now
    | eval LKUP_DSN = "%".LKUP_DSN."%"
    | where like(DS_NAME,LKUP_DSN)
    | table DS_NAME]
| table TIMESTAMP, DS_NAME, JOBNAME

Thanks, Ravikumar
Here is a simple table with dates and whether the user has accessed the account (marked with an 'x'). Across the top are the dates when the report is run, looking back 10 days including the day the report is run.

            14/01/2025 15/01/2025 16/01/2025 17/01/2025 18/01/2025 19/01/2025 20/01/2025 21/01/2025 22/01/2025
05/01/2025
06/01/2025
07/01/2025
08/01/2025
09/01/2025
10/01/2025 x
11/01/2025 x
12/01/2025 x
13/01/2025 x
14/01/2025
15/01/2025
16/01/2025 x
17/01/2025
18/01/2025 x
19/01/2025 x
20/01/2025 x
21/01/2025 x
22/01/2025

What do you expect the count to be for each of those days? Do you expect a single count at the end for the whole period? What does that count represent and why? Please fill in all the detail.
If we follow what you said, with 10 as the cycle (N) and 4 as the number of days (M) (which is also the number of times, because the same person or department accessing an account on the same day is recorded as 1 day): assuming that within the 10 days before today the same person and department accessed an account, with each day of access counted as 1, the final value is greater than 4. To achieve this, if I enlarge the time interval, I will append the results of each 10-day period separately.
TLDR; check your file permissions.

A bit late to the party, but this one had me swearing for a few hours. I work for an MSP and manage several separate Splunk Enterprise environments. After the latest upgrade to 9.3.2, I noticed that for a few of them the DS was "broken". Checked this post and others and went over configs side-by-side to see if there were any differences in outputs.conf, distsearch.conf, indexes.conf, etc. There shouldn't really be many of those, since almost all config is done through Puppet, and there are not many reasons to change settings on an individual server basis. All types of internal logs are forwarded to the indexer cluster. Always. Turns out that the upgrade process (ours or Splunk's) had left /opt/splunk/var/log/client_events owned by root, with 700 permissions. No wonder the files weren't even written to begin with... I suspect that on the environments that did work, I had out of habit run a chown -R splunk:splunk to ensure I hadn't messed something up somewhere. Lesson: check the obvious stuff first.
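(For anyone chasing the same symptom, one hedged way to confirm from Splunk itself that splunkd is failing to write, since internal logs are forwarded to the indexers anyway, is to search for permission-related errors. This is just a rough sketch; the keyword filters are assumptions, not an exhaustive list:

index=_internal sourcetype=splunkd log_level=ERROR ("client_events" OR "Permission denied")
| stats count by host, component
| sort - count

Anything showing up there is usually worth a quick ls -l on the path it complains about.)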
Why is "Start time: 7, end time: 16, name, department, account, number of occurrences: 5" expected when you don't know on the 16th that this is the start of another set of 4 consecutive accesses? What would you get if it was accessed on 10, 11, 12, 13, 16, 18, 19, 20, 21?
If the same person and department access the same account for 2 consecutive 4-day periods, they will receive:
Start time: 5, end time: 14, name, department, account, number of occurrences: 4
Start time: 6, end time: 15, name, department, account, number of occurrences: 4
Start time: 7, end time: 16, name, department, account, number of occurrences: 5
Start time: 8, end time: 17, name, department, account, number of occurrences: 6
Start time: 9, end time: 18, name, department, account, number of occurrences: 7
Start time: 10, end time: 19, name, department, account, number of occurrences: 8
Start time: 11, end time: 20, name, department, account, number of occurrences: 7
Start time: 12, end time: 21, name, department, account, number of occurrences: 6
Start time: 13, end time: 22, name, department, account, number of occurrences: 5
Start time: 14, end time: 22, name, department, account, number of occurrences: 4
Start time: 15, end time: 23, name, department, account, number of occurrences: 4
Start time: 16, end time: 24, name, department, account, number of occurrences: 4
Because our data is collected today and yesterday, according to what you said, 10 is the cycle (N) and 4 is the number of days (M) (which is also the number of times, because the same person or department accessing one account on the same day is recorded as 1 day).
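(Not the final answer to this thread, but in case it helps to see the general shape of a rolling-window check in SPL, here is a rough sketch under assumed index, sourcetype, and field names (name, department, account) that counts distinct access days per person/department/account over a sliding 10-day window; the threshold and window would need adapting to the exact requirement being discussed:

index=access_audit sourcetype=account_access
| bin _time span=1d
| stats count as daily_events by _time, name, department, account
| sort 0 - _time
| streamstats time_window=10d dc(_time) as access_days_in_window by name, department, account
| where access_days_in_window >= 4
| table _time, name, department, account, access_days_in_window

The stats by day collapses multiple accesses on the same day to one row, which matches the "same day counts as 1" rule described above.)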
One more thing that is mandatory to know: you cannot manage the DS itself with DS functionality! Don't even try it!!! For that reason it's good to use a dedicated DS server if/when you have several clients to manage. The DS can be a physical or virtual server; it doesn't matter which, as long as there are enough resources for it. Currently you can even make a pool of DSs that works as one. If you are a Splunk Cloud customer you can order a dedicated DS license from Support by creating a service ticket; I have never tried whether I can do this as a Splunk Enterprise customer too. After 9.2 there are some new configuration options that you must set on the DS, especially if/when you are forwarding its logs to centralized indexers.
OK so what would you get if there were two periods of 4 consecutive days (10, 11, 12 and 13, and 16, 17, 18 and 19)?
Yes, what I want to achieve is to count the alarm results for every first 7 days plus 1 day
OK so the number of "visits" is because the 10 day periods 6-15, 7-16, 8-17, 9-18 and 10-19 all contain the same period of 4 consecutive visits (10, 11, 12 and 13)?