Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

I have a simple lookup file with two fields, user and host:

    user    host
    Bob     1
    Dave    2
    Karen   x
    Sue     y

I want to exclude any results from my search where there is any combination of host AND user that matches any value from the lookup. For example, exclude any results where:

- the user is Bob and the host is either 1, 2, x or y
- the user is either Bob, Dave, Karen or Sue and the host is x

I'm playing with this search, which appears to work, but I'm unsure if there's a flaw in my logic, or if there's a better way to do it:

    index=proxy sourcetype="proxy logs" user="*"
        NOT ([| inputlookup lookup.csv | fields user | format ]
        AND [| inputlookup lookup.csv | fields host | format ])
    | stats c by username, host

Thanks in advance
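The two format subsearches should produce the intended NOT (user-list AND host-list) filter, but subsearches are subject to result limits on large lookups. A sketch of an alternative that tests lookup membership with the lookup command instead (this assumes lookup.csv is usable as a lookup table file; the found_* field names are arbitrary):

```spl
index=proxy sourcetype="proxy logs" user="*"
| lookup lookup.csv user OUTPUT user as found_user
| lookup lookup.csv host OUTPUT host as found_host
| where isnull(found_user) OR isnull(found_host)
| fields - found_user found_host
| stats count by user, host
```

Each lookup call returns the match field itself only when the value exists in the lookup, so an event is kept unless both its user and its host appear somewhere in the file.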
    index=main sourcetype=_json status="True"
    | stats count(status) as True by name
    | append [| search index=main sourcetype=json status="False" | stats count(status) as False by name]
    | append [| search index=main sourcetype=json status="*" | stats count(status) as Total by name]
    | stats sum(True) as True sum(False) as False sum(Total) as Total max(Performance) as Performance by name
    | eval Percentage=round(((True/Total)*100),0)
    | fields Percentage

Is it possible to show a trendline, and whether Percentage is up or down compared to last month?
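For the month-over-month comparison, one sketch (based on the search in the question, which mixes sourcetype=_json and sourcetype=json, so substitute whichever is real) is to compute the percentage per month and compare adjacent months with streamstats:

```spl
index=main sourcetype=_json earliest=-2mon@mon latest=@mon
| bin _time span=1mon
| stats count(eval(status="True")) as True count as Total by _time, name
| eval Percentage=round((True/Total)*100, 0)
| sort 0 name, _time
| streamstats current=f window=1 last(Percentage) as prev_pct by name
| eval direction=case(Percentage>prev_pct, "up", Percentage<prev_pct, "down", isnotnull(prev_pct), "flat")
```

For a visual trendline, the same monthly stats can instead feed a timechart, or a sparkline() aggregation in stats.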
Hello Team, I have logs with the below pattern:

    08/31/2023 8:00:00:476 am ........ count=0
    08/31/2023 8:00:00:376 am ........ process started
    08/31/2023 8:00:00:376 am ...... XXX Process

I need the process name and the count to be displayed together, but I don't have any common values/names/strings to match them. I have 4 similar processes and their counts together in the logs. Is there a way to match them together? Any help is much appreciated.
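When there is no shared key, one common trick is to sort the events into time order and carry the most recent process name forward onto the count event with streamstats. A sketch only: your_index and both rex patterns are placeholder assumptions to adapt to the real log format.

```spl
index=your_index ("count=" OR "Process")
| sort 0 _time
| rex "count=(?<proc_count>\d+)"
| rex "(?<proc_name>\w+ Process)"
| streamstats last(proc_name) as proc_name
| where isnotnull(proc_count)
| table _time proc_name proc_count
```

This relies on each count=... line following its process's lines in time order; if several processes share a timestamp, the pairing may need a tiebreaker.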
The short-term solution we went with was to update deployer_lookups_push_mode so the lookups are not updated as part of the push. Then, any time we needed to update the lookups, we leveraged the Lookup Editor app's API endpoint (/services/data/lookup_edit/lookup_contents) as part of the CI/CD pipeline we use to push updates to the cluster. There are example scripts posted online that you should be able to find pretty easily to do this. It's not the most elegant solution, but it's getting us through until we can upgrade.
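A rough sketch of such a CI/CD step, shown as a dry run that only prints the request. The host, credentials, app namespace, and lookup name are placeholders; only the endpoint path comes from the answer above, and the contents format (a JSON array of rows, header first) is how the Lookup Editor app is commonly driven.

```shell
SPLUNK_URI="https://shc-member.example.com:8089"   # hypothetical SHC member
APP="search"                                       # app namespace (assumption)
LOOKUP_FILE="my_lookup.csv"                        # lookup to update (assumption)
# Contents as a JSON array of rows; the first row is the header.
CONTENTS='[["user","host"],["Bob","1"],["Dave","2"]]'

# Build the curl invocation; echo it for review, run it with real credentials.
CMD="curl -k -u \$SPLUNK_USER:\$SPLUNK_PASS ${SPLUNK_URI}/services/data/lookup_edit/lookup_contents -d namespace=${APP} -d lookup_file=${LOOKUP_FILE} --data-urlencode contents=${CONTENTS}"
echo "$CMD"
```

Keeping the lookup contents in version control and generating CONTENTS from the repo file makes the push reproducible.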
Hi, actually you can, but you shouldn't. Just "play" with the port numbers in the conf files, use the tar package, and install the instances in separate directories. This is doable in a lab environment, but never do it in production. You shouldn't start your Splunk career with this kind of hack, though; just use enough VMs etc. r. Ismo
One small step for man...  I really hope that a fresh install will be reverted as well.
Hi, I'm using a Splunk Enterprise instance based on a Docker image. The dashboard is getting all the default Windows events, but it isn't getting Sysmon events. I've created the inputs.conf file in the local directory; in that file I'm forwarding both "Microsoft-Windows-Windows Firewall With Advanced Security/Firewall" and "Microsoft-Windows-Sysmon/Operational" events. I see the Firewall events in the dashboard and see that as a source, but I don't get any of the Sysmon events, and it doesn't show up as a source. I've confirmed that the events are present in the Event Viewer on the client. I have installed the "Splunk Add-on for Sysmon" app, and on another separate Splunk Enterprise Docker image I tried installing the "Microsoft Sysmon Add-on" app.

In the inputs.conf file I have tried (on different instances):

    [WinEventLog://Microsoft-Windows-Sysmon/Operational]
    disabled = false
    renderXml = false

or:

    [WinEventLog://Microsoft-Windows-Sysmon/Operational]
    disabled = 0
    start_from = oldest
    current_only = 0
    checkpointInterval = 5
    index = main
    renderXml = true

or:

    [WinEventLog://Microsoft-Windows-Sysmon/Operational]
    disabled = false
    renderXml = true

None have worked. I have installed the universal forwarder both manually and via the command line to rule out the quiet install, and I have even tried giving the forwarder service full admin rights to rule out issues accessing the logs, but I am still not getting any Sysmon events in the dashboard. What am I missing?
Hi, you should just enable that input in inputs.conf on your Linux box. BUT there is an issue if you are using anything other than local accounts in /etc/passwd (like AD or LDAP authentication). This passwd.sh reads only entries in the /etc/passwd file, nothing else. It's quite common that most users are in LDAP or AD and are authenticated and authorized against those directories; then there is no information about those users on the local server. Probably most Linux shops (more than a couple of servers) do it that way. Unfortunately, Splunk_TA_nix doesn't currently support anything other than local accounts.

Basically, you could try to create a new check, e.g. passwd_getent.sh, which is a copy of passwd.sh with the following modification:

    CMD='eval date ; eval LD_LIBRARY_PATH=$SPLUNK_HOME/lib $SPLUNK_HOME/bin/openssl sha256 $PASSWD_FILE ; cat $PASSWD_FILE'

====>

    CMD='eval date ; eval LD_LIBRARY_PATH=$SPLUNK_HOME/lib $SPLUNK_HOME/bin/openssl sha256 $PASSWD_FILE ; getent passwd'

This should work on most recent Linux versions, but unfortunately I don't have a suitable environment to test it right now. Of course, you need to add the new check to inputs.conf too. r. Ismo
Actually, the events I am interested in are not returned by this search. I noticed that this search is returning as an event the query that I am running. Maybe I should search a different index.
Thank you so much, it's working.
Are the events you are interested in returned by this search? Do you need to filter further e.g.  index=_internal dbconnect exception
Assuming there is a space between the words, and not a new line character, try this | rex "statement:(?<firstwords>\w+( \w+)?)"
Splunk is logging something in index=_internal. For instance, if I run "index=_internal dbconnect" I see something, but I am not sure that this is the exact query.
What do you mean by "throughput limit"?  The UF has a rate limit which defaults to 256KBps.  The UF will read data at that rate until it catches up (if ever), but it will not stop reading. Tell us more about the symptoms so we can offer suggestions.
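For reference, that limit lives in limits.conf on the forwarder and can be raised if catching up is too slow; a minimal sketch (value in KB per second, 0 = unlimited):

```
[thruput]
maxKBps = 1024
```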
In another event, the field statement can be: RESTORE VERIFYONLY FROM DISK = 'I:\toto.bak'. In this case, I need to get statement=RESTORE VERIFYONLY (the first 2 words).
The MC (Monitoring Console) has its own dashboard to show license usage, with some selections by which you can see the values. Just go to Settings -> Monitoring Console -> Indexing -> License Usage -> Historic License Usage, then Split By: By Index. Otherwise, if you have an all-in-one server, you can also check this from Settings -> Licensing -> Usage Report -> Previous 60 days -> Split by: index.

Both of those show the N (10?) biggest indexes. If you want to check a specific index, just copy that query by opening it from the magnifying glass, then modify it to something like:

    index=_internal idx=<YOUR INDEX NAME> [ `set_local_host`] source=*license_usage.log* type="Usage"
    | eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
    | eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
    | eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
    | bin _time span=1d
    | stats sum(b) as b by _time, pool, s, st, h, idx
    | timechart span=1d sum(b) AS volumeB by idx fixedrange=false
    | join type=outer _time
        [ search index=_internal idx=<YOUR INDEX NAME> [ `set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
        | eval _time=_time - 43200
        | bin _time span=1d
        | dedup _time stack
        | stats sum(stacksz) AS "stack size" by _time]
    | fields - _timediff
    | foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

r. Ismo
My first answer said how to create a table with events of the same job from both indexes, but then you said you don't want a table.
Thank you for your answer. Unfortunately, I tested it on my real data, which are quite a bit more complex than the sample I gave. I need to work on a statement included in an XML field from the SQL event log. Some statements have only one word and others have more than two words. The statement is delimited by a carriage return in the event. So with the search

    | rex ".*statement:(?<statement>\w+(\s\w+)?)"

on the event below, the statement field returned is "sp_addlinkedsrvlogin additional_information". The word after sp_addlinkedsrvlogin is on the next line, so it's not what I expect; in this case, I just want sp_addlinkedsrvlogin. Please find the complete event below. Regards, Tchounga

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='MSSQL$MWPBZAS1$AUDIT'/><EventID Qualifiers='16384'>33205</EventID><Level>0</Level><Task>3</Task><Keywords>0x80a0000000000000</Keywords><TimeCreated SystemTime='2023-08-31T04:30:01.964529800Z'/><EventRecordID>134063208</EventRecordID><Channel>Security</Channel><Computer>swpcfrbza354.cib.net</Computer><Security UserID='S-1-5-21-2847098101-2387550839-3588296759-1127899'/></System><EventData><Data>audit_schema_version:1 event_time:2023-08-31 04:30:00.9332742 sequence_number:1 action_id:CR succeeded:true is_column_permission:false session_id:53 server_principal_id:272 database_principal_id:1 target_server_principal_id:0 target_database_principal_id:0 object_id:0 user_defined_event_id:0 transaction_id:5417128 class_type:SL duration_milliseconds:0 response_rows:0 affected_rows:0 client_ip:100.83.120.237 permission_bitmask:00000000000000000000000000000000 sequence_group_id:93E8A6AF-640E-4EC2-B401-76F0ED6957A9 session_server_principal_name:CIB\ipcb3proc-sqlag-bd4 server_principal_name:CIB\ipcb3proc-sqlag-bd4 server_principal_sid:010500000000000515000000f544b3a977224f8e3710e1d5dc351100 database_principal_name:dbo target_server_principal_name: target_server_principal_sid: target_database_principal_name: 
server_instance_name:SWPCFRBZA354\MWPBZAS1 database_name:master schema_name: object_name:LSuser statement:sp_addlinkedsrvlogin additional_information:&lt;action_info xmlns="http://schemas.microsoft.com/sqlserver/2008/sqlaudit_data"&gt;&lt;server_name&gt;&lt;![CDATA[SWPDFRSQLADM1\MWPADM01]]&gt;&lt;/server_name&gt;&lt;/action_info&gt; user_defined_information: application_name:SQLAgent - TSQL JobStep (Job 0x451A71BE3BB91D4DBF2A1A6C12446006 : Step 1) </Data></EventData></Event>
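Since the statement value is delimited by a line break, one sketch is to stop the capture at the end of the line instead of counting words (this assumes the raw event really contains a newline after sp_addlinkedsrvlogin, as described):

```spl
| rex "statement:(?<statement>[^\r\n]+)"
```

This captures everything after statement: up to the first carriage return or newline, so one-word and multi-word statements both come out whole.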
Try brackets around the field name - [abc:def]
How can I see the daily license usage of one index in Splunk?