Just use a couple of stats: first count the events per user, then create a new field combining the user and its count, then re-stats with the values. E.g.

| makeresults format=csv data="_time,user,src_ip
2025-08-11,ronald,192.168.2.5
2025-08-11,jasmine,192.168.2.5
2025-08-11,tim,192.168.2.6
2025-08-11,ronald,192.168.2.5"
``` Like this ```
| stats count by user src_ip
| eval user_count=user.":".count
| stats values(user*) as values_user* by src_ip
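Not SPL, but the grouping logic of those two stats passes can be sanity-checked outside Splunk. A minimal Python sketch (field and variable names are illustrative, not Splunk internals) emulating `stats count by user src_ip` followed by `values(...) by src_ip` on the sample rows above:

```python
from collections import Counter, defaultdict

# The sample rows from the makeresults above (user, src_ip).
rows = [
    ("ronald", "192.168.2.5"),
    ("jasmine", "192.168.2.5"),
    ("tim", "192.168.2.6"),
    ("ronald", "192.168.2.5"),
]

# First pass: | stats count by user src_ip
counts = Counter(rows)

# Second pass: | eval user_count=user.":".count
#              | stats values(user*) as values_user* by src_ip
result = defaultdict(lambda: {"values_user": set(), "values_user_count": set()})
for (user, src_ip), count in counts.items():
    result[src_ip]["values_user"].add(user)
    result[src_ip]["values_user_count"].add(f"{user}:{count}")

for src_ip, fields in sorted(result.items()):
    print(src_ip, sorted(fields["values_user"]), sorted(fields["values_user_count"]))
```

For 192.168.2.5 this yields distinct users {jasmine, ronald} with pairs {jasmine:1, ronald:2}, matching what the SPL above produces per src_ip.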
@kn450 In your config, both serverCert and privKeyPath point to splunkWeb.pem. Does your splunkWeb.pem contain both the private key and the certificate together? It is better to keep the private key and the certificate in separate files. If your splunkWeb.pem contains both, you can use openssl to split them.

Ref: https://help.splunk.com/en/splunk-enterprise/administer/manage-users-and-security/9.4/secure-splunk-platform-communications-with-transport-layer-security-certificates/configure-splunk-web-to-use-tls-certificates#id_2771b640_5e98_4545_bbfe_8444e86bc31d__Configure_Splunk_Web_to_use_TLS_certificates

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
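As a sketch of the splitting step: the commands below first build a throwaway combined PEM (self-signed, demonstration only) and then extract the key and certificate into separate files. The file names (splunkWeb.pem, splunkWeb-key.pem, splunkWeb-cert.pem) are assumptions, not your real paths.

```shell
# Demonstration only: create a throwaway combined PEM to split.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 1 -subj "/CN=demo" 2>/dev/null
cat key.pem cert.pem > splunkWeb.pem

# Extract the private key and the certificate into separate files:
openssl pkey -in splunkWeb.pem -out splunkWeb-key.pem
openssl x509 -in splunkWeb.pem -out splunkWeb-cert.pem
```

openssl skips non-matching PEM blocks, so each command picks out its own object from the combined file; you would then point serverCert and privKeyPath at the two separate files.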
@kn450
For Splunk Web: https://help.splunk.com/en/splunk-enterprise/administer/manage-users-and-security/9.4/secure-splunk-platform-communications-with-transport-layer-security-certificates/configure-splunk-web-to-use-tls-certificates#id_2771b640_5e98_4545_bbfe_8444e86bc31d__Configure_Splunk_Web_to_use_TLS_certificates
For Splunk indexing and forwarding: https://help.splunk.com/en/splunk-enterprise/administer/manage-users-and-security/9.4/secure-splunk-platform-communications-with-transport-layer-security-certificates/configure-splunk-indexing-and-forwarding-to-use-tls-certificates#f9f8584f_8df6_4b5b_a6c6_56b6e79dc0fe__Configure_Splunk_indexing_and_forwarding_to_use_TLS_certificates
For inter-Splunk communication: https://help.splunk.com/en/splunk-enterprise/administer/manage-users-and-security/10.0/secure-splunk-platform-communications-with-transport-layer-security-certificates/configure-tls-certificates-for-inter-splunk-communication

Regards, Prewin
@RonaldCWWong If I understood you correctly, you want to group by src_ip, list all the distinct user values per IP, and also count how many times each user appears for that IP. Try the below:

...your base search...
| stats count by src_ip, user
| eventstats sum(count) as user_count by src_ip, user
| eval user_count_pair = user . ":" . user_count
| stats values(user) as values_user values(user_count_pair) as count_values_user by src_ip

Regards, Prewin
Hi community, I have a question on counting the number of events per values() value in the stats command. For example, given events with src_ip, user (and a couple more) fields, I would like to count each user's occurrences in the raw log. Example as below:

| stats values(user) as values_user by src_ip

Example:
_time       user     src_ip
2025-08-11  ronald   192.168.2.5
2025-08-11  jasmine  192.168.2.5
2025-08-11  tim      192.168.2.6
2025-08-11  ronald   192.168.2.5

I would like to have the result as:
values_user      count_values_user   src_ip
ronald jasmine   ronald:2 jasmine:1  192.168.2.5
tim              tim:1               192.168.2.6
@unclemoose Are you seeing events with eventtype=account_locked? If not, make sure the eventtype is saved in a visible app and its permissions are set to global.

Regards, Prewin
I am trying to learn SIEM tech and am at the stage where I'm trying to use/set up Splunk CIM. My pipeline uses fake logs and I am trying to get them to show up in the Authentication data model. However, it seems like the authentication tag is not being applied. (Files shortened.)

My eventtypes.conf:
[account_locked]
search = sourcetype="logstream" action="failure" signature="Account locked"
tags = authentication, failure, account_locked

My tags.conf:
[eventtype=account_locked]
authentication = enabled
failure = enabled
account_locked = enabled

And my props.conf:
[logstream]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TIME_PREFIX = "\"_time\": \"""
MAX_TIMESTAMP_LOOKAHEAD = 30
INDEXED_EXTRACTIONS = json
FIELDALIAS-src_user_for_user = user AS src_user
FIELDALIAS-src_for_src = src AS src
FIELDALIAS-dest_for_dest = dest AS dest
FIELDALIAS-app_for_app = app AS app
FIELDALIAS-dest_for_dest = dest AS dest

Now what is really stumping me here is that no event types are being recognized. However, if I search for those logs using the search string from the event type, I get the results and logs I am looking for:

search = sourcetype="logstream" action="failure" signature="Account locked"

A couple of things I confirmed:
- The HEC token is correct
- The field aliases are compliant with the Authentication data model
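For reference, eventtype and tag visibility is controlled by the app's metadata, which is one common reason a tag never reaches the data model. A sketch (file path and app name are assumptions, not from the post) of exporting them globally via local.meta:

# $SPLUNK_HOME/etc/apps/<your_app>/metadata/local.meta (sketch)
[eventtypes]
export = system

[tags]
export = system

With export = system, every eventtype and tag defined in that app is visible from all apps, including the one running the data model searches.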
Please share with the community what was wrong in your case - it might help others in the future.
Sorry for the late response. My issue has been fixed. Thanks for your replies.
> I want to divide Scada_count/dda_count. This is my use case.

You haven't answered the real question. To quote myself:

> However, your first search, if it has output, is dominated by | stats count as Scada_count by Area. This means that if there is more than one value for Area, this search will have more than one row of Scada_count. (If there is ever only one value of Area, why bother grouping by Area?) On the other hand, the second search can only ever produce one row of dda_count. What exactly do you expect append to achieve?

Here is another quote:

To ask an answerable data analytics question, follow these golden rules; nay, call them the four commandments:
1. Illustrate data input (in raw text, anonymized as needed), whether they are raw events or output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and desired output without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output and compare it with the desired output; explain why they look different to you if that is not painfully obvious.
Instead of just illustrating all the SPL, the best way to obtain concrete help is to first illustrate data - in your case, sample output from svc_summary, even data mockups that illustrate your observed variations - then illustrate the results you want from that data, then explain the logic between the data and the desired results. Like @livehybrid said, the summary index or _internal is not the problem. The problem is that volunteers here have difficulty understanding your business logic.
> The settings for TLS should be set the same way as they are on the management port.

Does this mean that it needs to match the port specified in mgmt_uri in the [shclustering] stanza?

> What do you mean by "doesn't work"?

The splunkd.log shows "useSSL=false," which goes against my intention. This log entry suggests that it is set to non-SSL. I assumed that if communication were via mTLS, "useSSL=true" would be set.

> Remember that you need to have a working CA for mTLS to work. Self-signed certs most probably won't work.

If it doesn't work with a self-signed certificate, I'll try this setting another time. Thank you for your advice.
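For what it's worth, the TLS behavior of the management port is driven by server.conf. A minimal sketch (certificate paths are placeholders, not from this thread) of the settings usually involved when requiring client certificates on splunkd:

# server.conf (sketch; paths are placeholders)
[sslConfig]
enableSplunkdSSL = true
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
requireClientCert = true

With requireClientCert = true, every peer connecting to the management port must present a certificate signed by a CA in sslRootCAPath, which is why a working CA matters for mTLS.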
I got the solution. Thanks to everyone who got involved in this. The issue was with the second search, which was showing the last 30 days only (before 15th July only).
@uagraw01 Your screen capture is showing a green dot by the Job report; this means there is a message associated with the job. Click on the dropdown to reveal the message. What does it say?
@PickleRick Hi, I removed all the highlighted attributes from the query, but I am still not getting any results.

(index=si_error source=scada (error_status=CAME_IN OR error_status=WENT_OUT) (_time=Null OR NOT virtual))
| fields - _raw
| fields + area, zone, equipment, element, isc_id, error, error_status, start_time
| search (area="*"), (zone="*"), (equipment="*"), (isc_id="*")
| eval _time=exact(if(isnull(start_time),'_time',max(start_time,earliest_epoch)))
| dedup isc_id error _time
| fields - _virtual_, _cd_
| fillnull value="" element
| sort 0 -_time -"_indextime"
| streamstats window=2 global=false current=true earliest(_time) AS start latest(_time) AS stop, count AS count by area zone equipment element error
| search error_status=CAME_IN
| lookup isc id AS isc_id OUTPUTNEW statistical_subject mark_code
| lookup new_ctcl_21_07.csv JoinedAttempt1 AS statistical_subject, mis_address AS error OUTPUTNEW description, operational_rate, technical_rate, alarm_severity
| fillnull value=0 technical_rate operational_rate
| fillnull value="-" alarm_severity mark_code
| eval description=coalesce(description,("Unknown text for error number " . error)), error_description=((error . "-") . description), location=((mark_code . "-") . isc_id), stop=if((count == 1),null,stop), start=exact(coalesce(start_time,'_time')), start_window=max(start,earliest_epoch), stop_window=min(stop,if((latest_epoch > now()),now(),latest_epoch)), duration=round(exact((stop_window - start_window)),3)
| fields + start, error_description, isc_id, duration, stop, mark_code, technical_rate, operational_rate, alarm_severity, area, zone, equipment
| dedup isc_id error_description start
| sort 0 start isc_id error_description asc
| search technical_rate>* AND operational_rate>* (alarm_severity="*") (mark_code="*")
| rename isc_id as Location, mark_code as "Mark code", technical_rate as "Technical %", operational_rate as "Operational %", alarm_severity as Severity
| lookup mordc_Av_full_assets.csv Area as area, Zone as zone, Section as equipment output TopoID
| lookup mordc_topo ID as TopoID output Description as Area
| search Area="Depalletizing, Decanting"
| stats count as Scada_count by Area
| table Scada_count
| appendcols
    [ search index=internal_statistics_1h
        [| inputlookup internal_statistics
         | where (step="Defoil and decanting" OR step="Defoil and depalletising") AND report="Throughput" AND level="step" AND measurement IN("Case")
         | fields id
         | rename id AS statistic_id]
      | eval value=coalesce(value, sum_value)
      | fields statistic_id value group_name location
      | eval _virtual_=if(isnull(virtual), "N", "Y"), _cd_=replace(_cd, ".*:", "")
      | sort 0 -_time _virtual_ -"_indextime" -_cd_
      | dedup statistic_id _time group_name
      | fields - _virtual_ _cd_
      | lookup internal_statistics id AS statistic_id OUTPUTNEW report level step measurement
      | stats sum(value) AS dda_count]

Note: when executing both queries individually, I am getting results.
Thank you! I wasn't aware that we have access to the document object when writing JavaScript in Synthetic scripts - does the documentation explicitly mention this? If so, I'd love to have a link and take a closer look.

Here's what worked for me: in my scenario, the parent element of the target has a static ID. Leveraging that, I crafted a JavaScript snippet that locates and returns the ID of the element containing the visible text the test needs to click. In the worst case, we could adapt the same logic to search across the entire page if needed.

Here is the JavaScript:

// Get the parent element
const parentId = 'contentSection';
const searchText = 'sign up';
const parentElement = document.getElementById(parentId);
if (!parentElement) {
  console.warn(`Parent element with ID "${parentId}" not found.`);
  return null;
}

// Normalize search text for case-insensitive comparison
const normalizedSearch = searchText.trim().toLowerCase();

// Search all descendant elements
const childElements = parentElement.querySelectorAll('*');
for (let child of childElements) {
  if (child.textContent.trim().toLowerCase() === normalizedSearch) {
    return child.id || null; // Return null if the element has no ID
  }
}

// No match found
return null;
@yuanliu I want to divide Scada_count/dda_count. This is my use case.
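If both sides really do reduce to a single row each, the division itself is just an eval after appendcols. A minimal sketch (field names taken from the searches in this thread; the subsearch body is elided):

| stats count as Scada_count
| appendcols
    [ search ... | stats sum(value) AS dda_count ]
| eval ratio = round(Scada_count / dda_count, 3)

The caveat raised earlier in the thread still applies: if Scada_count is split by Area, the first search returns multiple rows and appendcols will only line dda_count up against the first one.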
Hi @kn450, did you check splunkd.log for any errors? Please share the logs here for further troubleshooting.
Hi, we're looking for guidance on the best way to ingest FortiMail Cloud logs into Splunk Cloud. Our current environment includes:

Cloud: Splunk Cloud, FortiMail Cloud (hosted)
On-premise: SC4S server, heavy forwarder, and FortiAnalyzer

FortiMail Cloud is hosted by Fortinet, so we can't just point it at our SC4S like we would for an on-prem appliance. We do have the option to send logs to our on-prem FortiAnalyzer, but we're unsure if it's better to:
1. Route FortiMail Cloud logs → FortiAnalyzer on-prem → SC4S/HF → Splunk Cloud,
2. Send FortiMail Cloud logs directly to SC4S via an external connection, or
3. Use another recommended method (e.g., Fortinet APIs, scheduled log downloads, etc.)

Has anyone implemented a similar setup for FortiMail Cloud? Any best practices or pitfalls to avoid - especially regarding secure transport, parsing, and CIM compliance? Thanks in advance!
Hi @kn450
Please could you look at the logs in $SPLUNK_HOME/var/log/splunk/splunkd.log - are there any errors that might indicate why it failed to start?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing