All Posts

Hi @anu1, the dashboard is very easy, but it requires preparation that depends on the number of data sources you want to display in this dashboard. In a few words, you should:

- analyze your data sources and define the conditions for LOGIN, LOGOUT and LOGFAIL; e.g., for Windows, login is EventCode=4624, logout is EventCode=4634 and logfail is EventCode=4625;
- create an eventtype for each condition, assigning a tag (LOGIN, LOGOUT or LOGFAIL) to each eventtype;
- create some aliases so the fields to display share the same field names (e.g. UserName, IP_Source, Hostname, etc.);
- create a dashboard running a search like the following (see the sketch after this list):

tag=$tag$ host=$host$ UserName=$user$ | table _time tag HostName UserName IP_Source

The three tokens in the main search come from three inputs. Let me know if you need help creating the dashboard; it's very easy. Ciao. Giuseppe
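To make the preparation concrete, here is a minimal sketch of the eventtype/tag setup Giuseppe describes, assuming Windows Security logs; the stanza names and the sourcetype are illustrative, not prescribed by the post:

# eventtypes.conf -- one eventtype per condition (names are examples)
[windows_login]
search = sourcetype=WinEventLog:Security EventCode=4624

[windows_logout]
search = sourcetype=WinEventLog:Security EventCode=4634

[windows_logfail]
search = sourcetype=WinEventLog:Security EventCode=4625

# tags.conf -- attach the tag the dashboard search filters on
[eventtype=windows_login]
LOGIN = enabled

[eventtype=windows_logout]
LOGOUT = enabled

[eventtype=windows_logfail]
LOGFAIL = enabled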
@MikeMakai Please run `tcpdump` to verify if the expected logs are being received. If the expected output is observed, we can proceed to check from the Splunk side. If this reply helps you, Karma would be appreciated.
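For reference, a minimal capture command sketch, assuming syslog arrives over UDP port 514 on interface eth0 (adjust both for your environment):

# Show 20 packets of inbound syslog traffic with ASCII payload, no name resolution
tcpdump -i eth0 -nn -c 20 -A udp port 514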
@MikeMakai Could you share your `inputs.conf` file? Are you sending data directly from the FMC to Splunk, or is there an intermediate forwarder between your FMC and Splunk?
Please do not use screenshots to illustrate text data. Use a text table or a text box. But even the two index search screenshots are inconsistent, meaning there is no common dest_ip. You cannot expect all fields to be populated when there is no matching field value. This is basic mathematics. Like @bowesmana says, find a small number of events that you know have matching dest_ip in both indices, manually calculate what the output should be, then run the proposed searches on this small dataset. Here is a mock dataset loosely based on your screenshots but WITH matching dest_ip:

src_zone | src_ip | dest_zone | dest_ip | transport | dest_port | app | rule | action | session_end_reason | packets_out | packets_in | src_translated_ip | dvc_name | index | server_name | ssl_cipher | ssl_version
trusted | 10.80.110.8 | untrusted | 152.88.1.76 | UDP | 53 | dns_base | whatever1 | blocked | policy_deny | 1 | 0 | whateverNAT | don'tmatter | *firewall* | | |
 | | | 152.88.1.76 | | | | | | | | | | | *corelight* | whatever2 | idon'tcare | TLSv3

The first row is from index=*firewall*, the second from *corelight*. Because your two searches operate on different indices, @gcusello 's search can also be expressed with append (as opposed to OR) without much penalty. Like this:

index="*firewall*" sourcetype=*traffic* src_ip=10.0.0.0/8
| append [search index=*corelight* sourcetype=*corelight* server_name=*microsoft.com*]
| fields src_zone, src_ip, dest_zone, dest_ip, server_name, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name
| stats values(*) AS * BY dest_ip
| rename src_zone AS From, src_ip AS Source, dest_zone AS To, dest_ip AS Destination, server_name AS SNI, transport AS Protocol, dest_port AS Port, app AS "Application", rule AS "Rule", action AS "Action", session_end_reason AS "End Reason", packets_out AS "Packets Out", packets_in AS "Packets In", src_translated_ip AS "Egress IP", dvc_name AS "DC"

Using the mock dataset, the output is:

Destination | Action | Application | DC | Egress IP | End Reason | From | Packets In | Packets Out | Port | Protocol | Rule | SNI | Source | To
152.88.1.76 | blocked | dns_base | don'tmatter | whateverNAT | policy_deny | trusted | 0 | 1 | 53 | UDP | whatever1 | whatever2 | 10.80.110.8 | untrusted

This is a full emulation for you to play with and compare with real data:

| makeresults format=csv data="src_zone, src_ip, dest_zone, dest_ip, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name
trusted, 10.80.110.8, untrusted, 152.88.1.76, UDP, 53, dns_base, whatever1, blocked, policy_deny, 1, 0, whateverNAT, don'tmatter"
| table src_zone, src_ip, dest_zone, dest_ip, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name
| eval index="*firewall*"
``` the above emulates index="*firewall*" sourcetype=*traffic* src_ip=10.0.0.0/8 ```
| append [makeresults format=csv data="server_name, dest_ip, ssl_version, ssl_cipher
whatever2, 152.88.1.76, TLSv3, idon'tcare"
| eval index="*corelight*"
``` the above emulates index=*corelight* sourcetype=*corelight* server_name=*microsoft.com* ```]
| fields src_zone, src_ip, dest_zone, dest_ip, server_name, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name
| stats values(*) AS * BY dest_ip
| rename src_zone AS From, src_ip AS Source, dest_zone AS To, dest_ip AS Destination, server_name AS SNI, transport AS Protocol, dest_port AS Port, app AS "Application", rule AS "Rule", action AS "Action", session_end_reason AS "End Reason", packets_out AS "Packets Out", packets_in AS "Packets In", src_translated_ip AS "Egress IP", dvc_name AS "DC"
Please share your search so far and some sample data, then we might be able to help you with the search query.
Hey team, I have one requirement: I have to create a Splunk dashboard to report the # of logins and # of logouts. The inputs for the Splunk report should be as follows: input dropdowns - Time Picker, Customer, Host Name. Either identify the events using probe data or Splunk command metrics. The output for these metrics should be shown as a time graph with the # of logins and logouts; the graph should show the time, which host, and which customer we are using, and the query should also use the tokens when I run it. Can you give me the search query for this requirement? I used multiple queries but am not getting the exact data. Can you help me with the query? Thanks.
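For what it's worth, a minimal sketch of the kind of tokenized panel search such a dashboard might run, assuming LOGIN/LOGOUT tags (or eventtypes) are already defined and the dashboard inputs set $host$ and $customer$ tokens (the customer field name is an assumption):

(tag=LOGIN OR tag=LOGOUT) host=$host$ customer=$customer$
``` count logins and logouts over time; the span is illustrative ```
| timechart span=1h count BY tag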
@richgalloway The perfect solution, exactly what I was looking for. Thank you
Thanks Kiran
I am sending syslog data to Splunk from Cisco FMC. I am using UCAPL compliance and therefore cannot use eStreamer. The data is being ingested into Splunk, and the dashboard is showing some basic events, like connection events, volume file events and malware events. When I try to learn more about these events, it doesn't drill down into more info. For example, when I click on the 14 Malware Events and choose Open in Search, it just shows the number of events. There is no information regarding these events. When I click on Inspect, it shows command.tstats at 13 and command.tstats.execute_output at 1. It doesn't provide further clarity regarding the malware events. When I view the Malware Files dashboard on the FMC, it shows no data for malware threats. So, based on the FMC, it seems that the data in the Splunk dashboard is incorrect, or at least interprets malware events differently from the FMC dashboard.
Example 1 - you are using dedup src/dest - you can't do that, as I explained in my other post. Example 2 - dedup here is not useful - you have a multivalue src_ip and you will not have any duplicate src_ip in there relating to the dest_ip, so it's redundant. The best way to work out what's wrong here is to remove the last two lines and just let the stats work. If you work on a small time window where you KNOW what data to expect, you can more easily validate what's wrong. As @isoutamo says, you can even remove stats and just do table * - but work with a very small data set where the results are predictable. Then you can build the detail back up again.
It's the quotes - you are using * without quotes, so it's an invalid eval. Sometimes it's hard to debug eval token setters in dashboards, but generally, if you find that nothing appears to happen when you have an <eval> token setter, it's probably getting an error. I often have a panel behind depends="$debug_tokens$" that contains an html panel reporting token values, which I can turn on as needed. Also, in the Dashboard Examples app there is a showtokens.js which can help you uncover token issues.
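For illustration, a minimal SimpleXML sketch of such a hidden debug panel; the token names tok1 and tok2 are placeholders for whatever tokens you want to watch:

<row depends="$debug_tokens$">
  <panel>
    <title>Token debug</title>
    <html>
      <p>tok1 = $tok1$</p>
      <p>tok2 = $tok2$</p>
    </html>
  </panel>
</row>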
https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538062 - this link contains the exact steps needed, including removing the old peers from the CM, as "splunk offline --enforce-counts" alone is not enough.
Hi gcusello, Thanks for your response and suggestions. 1- As yuanliu  said, please see samples in text format (using the Insert/Edit Code Sample button). 2- I put all the search terms in the main search like you said as below:   index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* clientHost failedAttempts userAgent userName authType message timestamp actionName status "actionName":"login" "message":"Authentication failed" "status":"FAILURE" "timestamp":"2025-01-08T*"   Sample results:   {"endpoint":"/requesttoken","clientHost":"localhost:36976","userAgent":"Apache-HttpClient/5.2.3 (Java/17.0.13)","userName":"TEST6","authType":"LDAP","message":"Generate new token successfully.\nWarning: TEST6 Support cannot restore access to secrets/tokens for security reasons. Please save your secret/token and keep it safe and accessible.","timestamp":"2024-12-30T11:27:02.881-06:00","actionName":"requestToken","status":"SUCCESS"}, {"endpoint":"/gsql/simpleauth","clientHost":"localhost:53964","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"login succeed","timestamp":"2024-12-30T15:47:15.496-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/schema","clientHost":"localhost:54556","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2024-12-30T15:47:33.226-06:00","actionName":"showCatalog","status":"SUCCESS"}, {"endpoint":"/gsql/authtoken","clientHost":"localhost:54552","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"Generate new token successfully.\nWarning: TEST6 Support cannot restore access to secrets/tokens for security reasons. Please save your secret/token and keep it safe and accessible.","timestamp":"2024-12-30T15:47:35.36-06:00","actionName":"requestToken","status":"SUCCESS"}, {"endpoint":"/gsql/userdefinedtokenfunctions","clientHost":"localhost:54556","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"getUDFs succeeded","timestamp":"2024-12-30T15:47:38.872-06:00","actionName":"getUDFs","status":"SUCCESS"}, {"endpoint":"/gsql/schema","clientHost":"100.71.65.228","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2024-12-30T15:47:39.214-06:00","actionName":"showCatalog","status":"SUCCESS"}, {"endpoint":"/gsql/file","clientHost":"100.71.65.229","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"exportJob succeeded","timestamp":"2024-12-30T15:47:39.292-06:00","actionName":"exportJob","status":"SUCCESS"}, {"endpoint":"/gsql/authtoken","clientHost":"localhost:54552","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"Generate new token successfully.\nWarning: TEST6 Support cannot restore access to secrets/tokens for security reasons. 
Please save your secret/token and keep it safe and accessible.","timestamp":"2024-12-30T15:47:40.63-06:00","actionName":"requestToken","status":"SUCCESS"}, {"endpoint":"/gsql/schema","clientHost":"localhost:54556","userAgent":"GraphStudio","userName":"TEST1","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2024-12-30T15:47:41.877-06:00","actionName":"showCatalog","status":"SUCCESS"}, {"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.229","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST2","authType":"LDAP","message":"login succeed","timestamp":"2024-12-31T10:00:19.455-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.229","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST2","authType":"LDAP","message":"login succeed","timestamp":"2024-12-31T10:03:22.203-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/simpleauth","clientHost":"localhost:47404","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST3","authType":"LDAP","message":"login succeed","timestamp":"2024-12-31T10:18:22.9-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.229","failedAttempts":0,"userAgent":"GrpahStudio","userName":"TEST3","authType":"LDAP","message":"Authentication failed!","timestamp":"2024-12-31T10:25:32.26-06:00","actionName":"login","status":"FAILURE"}, {"endpoint":"/requesttoken","clientHost":"localhost:35260","userAgent":"Apache-HttpClient/5.2.3 (Java/17.0.13)","userName":"TEST6","authType":"LDAP","message":"Generate new token successfully.\nWarning: TEST6 Support cannot restore access to secrets/tokens for security reasons. Please save your secret/token and keep it safe and accessible.","timestamp":"2024-12-31T11:00:05.35-06:00","actionName":"requestToken","status":"SUCCESS"}, {"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.229","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST3","authType":"LDAP","message":"login succeed","timestamp":"2024-12-31T11:24:31.435-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/schema","clientHost":"localhost:47318","userAgent":"Apache-HttpClient/5.2.3 (Java/17.0.13)","userName":"c089265","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2024-12-31T21:15:55.995-06:00","actionName":"showCatalog","status":"SUCCESS"}, {"endpoint":"/gsql/schema","clientHost":"localhost:38336","userAgent":"Apache-HttpClient/5.2.3 (Java/17.0.13)","userName":"c089265","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2025-01-02T11:36:45.844-06:00","actionName":"showCatalog","status":"SUCCESS"}, {"endpoint":"/gsql/schema","clientHost":"100.82.128.85","userAgent":"Apache-HttpClient/5.2.3 (Java/17.0.13)","userName":"c089265","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2025-01-03T03:59:10.235-06:00","actionName":"showCatalog","status":"SUCCESS"}, {"endpoint":"/gsql/simpleauth","clientHost":"localhost:38012","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"login succeed","timestamp":"2025-01-06T13:47:43.429-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.229","failedAttempts":0,"userAgent":"GrpahStudio","userName":"TEST4","authType":"LDAP","message":"Authentication 
failed!","timestamp":"2025-01-06T13:48:27.717-06:00","actionName":"login","status":"FAILURE"}, {"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.228","failedAttempts":0,"userAgent":"GrpahStudio","userName":"TEST4","authType":"LDAP","message":"Authentication failed!","timestamp":"2025-01-06T13:48:32.587-06:00","actionName":"login","status":"FAILURE"}, {"endpoint":"/gsql/simpleauth","clientHost":"/127.0.0.1:43520","failedAttempts":0,"userAgent":"GrpahStudio","userName":"TEST4","authType":"LDAP","message":"Authentication failed!","timestamp":"2025-01-06T13:48:36.03-06:00","actionName":"login","status":"FAILURE"}, {"endpoint":"/gsql/simpleauth","clientHost":"localhost:60404","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST5","authType":"LDAP","message":"login succeed","timestamp":"2025-01-06T19:59:28.295-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.229","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST5","authType":"LDAP","message":"login succeed","timestamp":"2025-01-06T19:59:40.885-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/login","clientHost":"localhost:53886","clientOSUsername":"TEST4","failedAttempts":0,"userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"login succeeded","timestamp":"2025-01-06T20:45:36.492-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/file","clientHost":"10.138.170.165","clientOSUsername":"TEST4","userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"showCatalog succeeded","timestamp":"2025-01-06T20:45:37.241-06:00","actionName":"showCatalog","status":"SUCCESS"}, {"endpoint":"/gsql/login","clientHost":"localhost:39154","clientOSUsername":"TEST4","failedAttempts":0,"userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"login succeeded","timestamp":"2025-01-06T20:46:48.666-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/file","clientHost":"10.138.170.165","clientOSUsername":"TEST4","userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"showCatalog succeeded","timestamp":"2025-01-06T20:46:49.376-06:00","actionName":"showCatalog","status":"SUCCESS"}, {"endpoint":"/gsql/login","clientHost":"10.138.170.165","clientOSUsername":"TEST4","failedAttempts":0,"userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"login succeeded","timestamp":"2025-01-06T20:47:14.033-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/file","clientHost":"localhost:39154","clientOSUsername":"TEST4","userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"Successfully used graph 'aml_risk_graph'.","timestamp":"2025-01-06T20:47:14.863-06:00","actionName":"useGraph","status":"SUCCESS"}, {"endpoint":"/gsql/file","clientHost":"localhost:39154","clientOSUsername":"TEST4","userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"Successfully created query 'Del_orphan_edges_for_previous_primary'.","timestamp":"2025-01-06T20:47:17.079-06:00","actionName":"createQuery","status":"SUCCESS"}, {"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.228","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"login succeed","timestamp":"2025-01-06T20:55:21.895-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/schema","clientHost":"localhost:43048","userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"Successfully got schema for graph 
aml_risk_graph.","timestamp":"2025-01-06T20:56:34.057-06:00","actionName":"showCatalog","status":"SUCCESS"}, {"endpoint":"/gsql/queries","clientHost":"localhost:43048","userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"showQuery succeeded","timestamp":"2025-01-06T20:56:34.781-06:00","actionName":"showQuery","status":"SUCCESS"}, {"endpoint":"/gsql/file","clientHost":"localhost:39154","clientOSUsername":"TEST4","userAgent":"GSQL Shell","userName":"TEST4","authType":"LDAP","message":"Successfully installed query [Del_orphan_edges_for_previous_primary].","timestamp":"2025-01-06T20:57:36.229-06:00","actionName":"installQuery","status":"SUCCESS"}, {"endpoint":"/gsql/schema","clientHost":"100.71.65.229","userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2025-01-06T20:57:46.93-06:00","actionName":"showCatalog","status":"SUCCESS"}, {"endpoint":"/gsql/queries","clientHost":"localhost:60138","userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"showQuery succeeded","timestamp":"2025-01-06T20:57:47.563-06:00","actionName":"showQuery","status":"SUCCESS"}, {"endpoint":"/gsql/simpleauth","clientHost":"localhost:48050","failedAttempts":0,"userAgent":"GraphStudio","userName":"sxvedag","authType":"LDAP","message":"login succeed","timestamp":"2025-01-07T08:56:29.476-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/gsql/schema","clientHost":"localhost:48036","userAgent":"GraphStudio","userName":"sxvedag","authType":"LDAP","message":"Successfully got schema for graph aml_risk_graph.","timestamp":"2025-01-07T08:56:38.165-06:00","actionName":"showCatalog","status":"SUCCESS"}, {"endpoint":"/gsql/queries","clientHost":"100.71.65.229","userAgent":"GraphStudio","userName":"sxvedag","authType":"LDAP","message":"showQuery succeeded","timestamp":"2025-01-07T08:56:38.823-06:00","actionName":"showQuery","status":"SUCCESS"}, {"endpoint":"/gsql/simpleauth","clientHost":"100.71.65.228","failedAttempts":0,"userAgent":"GraphStudio","userName":"TEST4","authType":"LDAP","message":"login succeed","timestamp":"2025-01-07T10:16:11.689-06:00","actionName":"login","status":"SUCCESS"}, {"endpoint":"/requesttoken","clientHost":"localhost:45058","userAgent":"Apache-HttpClient/5.2.3 (Java/17.0.13)","userName":"TEST6","authType":"LDAP","message":"Generate new token successfully.\nWarning: TEST6 Support cannot restore access to secrets/tokens for security reasons. Please save your secret/token and keep it safe and accessible.","timestamp":"2025-01-08T09:20:02.381-06:00","actionName":"requestToken","status":"SUCCESS"} ]   But here I don’t know why the filter is giving me information about "status":"SUCCESS" and "message":"login succeed" even my login failed condition is "message":"Authentication failed!" Maybe my condition in the search query is wrong. Also I was trying to get the results for the timestamp but it is showing all for other timestamp. 
3- I also tried to check the condition for each host in our infrastructure and each account by using the search below:

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* clientHost failedAttempts userAgent userName authType message timestamp actionName status "actionName":"login" "message":"Authentication failed" "status":"FAILURE" "timestamp":"2025-01-08T*"

4- I tried to use the stats command to aggregate results and the where command to filter them, something like this:

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* clientHost failedAttempts userAgent userName authType message timestamp actionName status "actionName":"login" "message":"Authentication failed" "status":"FAILURE" "timestamp":"2025-01-08T*"
| stats count BY actionName host
| where count>3

Results: No results found. Thank you for your help.
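In case it helps narrow things down, here is a sketch of a field-based variant, assuming the JSON fields are extracted into real fields (pipe through spath first if they are not); quoted "key":"value" terms in a base search match raw text rather than field values, which is one possible reason SUCCESS events slip through:

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT*
``` extract JSON fields, then filter on fields instead of raw strings ```
| spath
| search actionName="login" status="FAILURE"
| stats count BY userName host
| where count>3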
Maybe this can be a solution to your challenge: https://community.splunk.com/t5/Deployment-Architecture/KVStore-does-not-start-when-running-Splunk-9-4/m-p/708304/thread-id/29016/highlight/false#M29017
Have you checked mongod.log for why this is not starting? There is also another case where the Windows OS version was not supported by Splunk 9.4.0: https://community.splunk.com/t5/Splunk-Enterprise/KVstore-unable-to-start-after-upgrade-to-Splunk-Enterprise-9-4/m-p/708264#M21264
I did some reading of the documentation and realized that the underlying MongoDB was upgraded to 7. I figured out that MongoDB 5+ requires the AVX instruction set. So, time to check whether the CPU supports AVX - in my case the CPU model did support these instructions, but running the lscpu command didn't show the AVX flags. It turned out that AVX instructions were not available because the VM had processor compatibility mode enabled. In Hyper-V we had to clear the "Allow migration to a virtual machine host with a different processor version" checkbox. After the VM was restarted, AVX appeared in the CPU flags and the Splunk KV Store was operational. Lesson learned: before upgrading to 9.4 (or making a fresh install), check whether the AVX flag is available. If it isn't, it is about time to upgrade your hardware; until then, stick to Splunk 9.3.
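A quick way to check, assuming a Linux guest:

# Print the CPU flags line; if AVX is exposed to the VM it will include "avx"
lscpu | grep -i avx
# Alternative check straight from the kernel's view of the CPU
grep -wo avx /proc/cpuinfo | sort -u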
I am posting this to maybe save you the few hours of troubleshooting I went through. I did a clean install of Splunk 9.4 in a small customer environment with a virtualized all-in-one instance. After the installation there was an error notifying that the KV Store could not start and that the mongo log should be checked. The following error was logged:

ERROR KVStoreConfigurationProvider [4755 KVStoreConfigurationThread] - Could not start mongo instance. Initialization failed.

Mongod.log was completely empty, so there were no clues in the log files about what was wrong and what I could do to make the KV Store operational. Time to start Googling. The solution is posted in the next post.
How about ceil for diffTime? https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/MathematicalFunctions#ceiling.28.26lt.3Bnum.26gt.3B.29_or_ceil.28.26lt.3Bnum.26gt.3B.29
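For instance, a minimal sketch, assuming a hypothetical diffTime value in seconds that should round up to whole minutes (the field name and unit are assumptions, not from the docs page):

``` round the duration up to the next whole minute ```
| eval diffTime = ceil(diffTime / 60)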
If you can, you could try to standardise your company's log messages, like this:

Mandatory part

Here are some fields/information that every event must contain, regardless of the service. It doesn't matter if those fields are KV, JSON, or just placement formatted. More important is that they are always present and easily identified in the log event.

Field | Content | Example | Purpose
timestamp | RFC-3339 formatted timestamp | 2024-07-01T12:13:15.123+03:00 | When an event occurs in this service.
log_type | audit/apps/trace/metric | audit | What is the event type from a security content perspective?
source_ip | IP of logging system/service | 10.11.22.123 | Where did the event occur from an IP perspective?
source_system | Source system of log event | aa.bb.local | Host or service where the event was created.
process | Process/service which has processed the event | app_abc | Which application/service processed the event.
sessionId | Session where the event belongs | 82B98B54-9553-43CD-A5AB-E6F45656CD95 | e.g. GUID to identify the entire session.
requestId | Request where the event belongs | 9DF09DE7-4061-487B-953C-49B73C000E2C | e.g. GUID to identify an individual request within a session.
userId | User's identification on the service | a12345 | Pseudonyms should be used instead of real user IDs to avoid exposing PII.
outcome | Status of action | Error | Did the action succeed or fail?
errorDetails | Details for the action result | Not authorized | A more detailed error message; the full message could be part of the service-based payload.
payload | Application/service-specific parts | { "as":23, { "aa":"bb", "cc":12}} | A separate payload based on the real audit trail needs.

In that way you could build some kind of data model (DM) based on this, but as I said, usually the payload is the interesting part, and this differs for every subsystem, service, etc. And there is a lot of equipment which has its own log format.
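To make the scheme concrete, here is one example event in this format, built from the example values in the table above; the nested "details" key inside payload is an assumption added so the payload is valid JSON:

{
  "timestamp": "2024-07-01T12:13:15.123+03:00",
  "log_type": "audit",
  "source_ip": "10.11.22.123",
  "source_system": "aa.bb.local",
  "process": "app_abc",
  "sessionId": "82B98B54-9553-43CD-A5AB-E6F45656CD95",
  "requestId": "9DF09DE7-4061-487B-953C-49B73C000E2C",
  "userId": "a12345",
  "outcome": "Error",
  "errorDetails": "Not authorized",
  "payload": { "as": 23, "details": { "aa": "bb", "cc": 12 } }
}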
Thank you for the input, everyone. @isoutamo - you are correct in that each data source I'm looking at has vastly different data available... Some sources come from endpoint agents which have username, endpoint name, IP address (local/public), URL, URL IP, etc. Other sources are from network devices and might track users by local IP only, but also might record which FW the request goes through, etc. I have one source which only lists a single field to identify the user... the MAC address... really not helpful without an additional lookup. I ended up using a number of macros and lots of coalesces to make my field names consistent.
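For anyone landing here later, a minimal sketch of that coalesce-based normalization pattern; every source-specific field name below is a made-up example, and the lookup name is hypothetical:

``` pick the first non-null candidate from each source's naming convention ```
| eval user = coalesce(userName, user_name, src_user)
| eval src  = coalesce(src_ip, local_ip, client_address)
``` resolve MAC-only sources via a lookup, then fall back to it ```
| lookup mac_to_user mac AS mac_address OUTPUT user AS mac_user
| eval user = coalesce(user, mac_user)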