All Posts

Note that if you have any days with no results, you will not get a data point for that day for that source, so it will affect the average. You can work around that, e.g. by zero-filling the missing days, if that's an issue.
Instead of using timechart, you can use stats and bin by _time, e.g.

| tstats count as log_count where index=myindex AND hostname="colla" AND source=* earliest=-7d@d latest=now by _time span=1d, source
| stats sum(log_count) as sum_log by _time, source
| eventstats avg(sum_log) as avg_sum_log by source

and then give yourself an independent scale in your trellis. You seem to need both the tstats and the stats to get the trellis-by-source option.
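Putting that together with the note above about empty days, a minimal end-to-end sketch (reusing the index and hostname from the question; timechart's continuous buckets plus fillnull supply the zero days, and untable restores the row shape the trellis needs):

| tstats count as log_count where index=myindex AND hostname="colla" AND source=* earliest=-7d@d latest=now by _time span=1d, source
| timechart span=1d sum(log_count) by source
| fillnull value=0
| untable _time source sum_log
| eventstats avg(sum_log) as avg_sum_log by source

With sum_log and avg_sum_log side by side, each trellis panel should draw two lines, the average one flat; making it red is just a series-color setting on the chart.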
Hi @koshyk, OK, as I said, you have to create a cluster in each environment. Then each Search Head Cluster must be configured as a search head of each Indexer Cluster. For more details see https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Configuremulti-clustersearch Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
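For reference, a minimal server.conf sketch of that multi-cluster search setup on a search head, following the docs linked above (the cluster manager hostnames and pass4SymmKey values here are placeholders, not values from this thread):

# server.conf on the search head (illustrative values)
[clustering]
mode = searchhead
manager_uri = clustermanager:cloud_a, clustermanager:cloud_b

# one stanza per indexer cluster the search head should search
[clustermanager:cloud_a]
manager_uri = https://cm-cloud-a.example.com:8089
pass4SymmKey = <cluster_a_secret>

[clustermanager:cloud_b]
manager_uri = https://cm-cloud-b.example.com:8089
pass4SymmKey = <cluster_b_secret>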
Agree, Giuseppe. I was wondering if anyone in the community has done something similar. >> if yes you have to create a multisite cluster, one for each different Cloud, configuring replications between them and configuring Search Heads, in each Cloud, to search in all the clusters. << Creating the SHC and replication is not the problem; the key point is that end users have to search from a single Search Head Cluster, and it should search ALL the clusters across the multi-cloud setup. (This could be an SHC on top of the SHCs in each cloud, or preferably going directly to the indexers in each cloud, if feasible.) The organisation doesn't need data to be replicated between clouds, but we can create clusters within each cloud provider for the replication factor.
Try this:

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access
| chart count by app date_month
Hi @JandrevdM, what's the issue in your search? It seems to be correct, even if I'd simplify it:

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access
| stats count by date_month app
| rename count as "Number of Users"
| table date_month app "Number of Users"

If you have a lot of data to analyze, you could schedule this search, e.g. every night with the events of the day, saving the results in a summary index and searching on the summary index. Ciao. Giuseppe
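Two caveats with that approach: stats count counts events rather than distinct users, and date_month sorts alphabetically, not chronologically. If the goal is the month-per-column table from the question with a distinct-user count, something along these lines may be closer (a sketch, assuming the user field is populated on these events):

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access earliest=-3mon@mon latest=@mon
| eval month=strftime(_time, "%Y-%m")
| chart dc(user) over app by month

chart ... over app by month puts one row per application and one column per month, and the %Y-%m format keeps the columns in calendar order.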
Hi @koshyk, at first, this isn't a question for the Community, but for a Certified Splunk Architect or a Splunk Professional Services specialist. Anyway, yes, it's possible, but you should answer one question first: do you want to replicate data across the clouds or not? If yes, you have to create a multisite cluster, one for each different Cloud, configuring replication between them and configuring Search Heads, in each Cloud, to search in all the clusters. If not, you have to create a cluster in each Cloud and only configure Search Heads, in each Cloud, to search in all the clusters. Anyway, this project requires the engagement of a specialist to analyze requirements and design the solution. Ciao. Giuseppe
Hello everyone, I have created a dashboard that shows total log volumes for different sources across 7 days. I am using a line chart and trellis. As shown in the pic, I want to add the median/average value of logs as a horizontal red line. Is there a way to achieve it? The final aim is to be able to observe the pattern and median/avg log volume of a certain week, which ultimately helps to define a baseline log volume for each source. Below is the SPL I am using:

| tstats count as log_count where index=myindex AND hostname="colla" AND source=* earliest=-7d@d latest=now by _time, source
| timechart span=1d sum(log_count) by source

Any suggestions would be highly appreciated. Thanks
Hi, I am trying to get a list of all users that hit our AI rule and see if this increases or decreases over a timespan of 90 days. I want to see the application they use, with the last three months displayed as columns holding a count of users. Example below:

Applications  June (Month 1)  July (Month 2)  August (Month 3)
chatGPT       213             233             512

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access
| table user, app, date_month
```| dedup user, app, date_month```
| stats count by date_month, app
| sort date_month, app 0
| rename count as "Number of Users"
| table date_month, app, "Number of Users"
Hello @KendallW, can you please help with a follow-up question? In my case, I'm using HEC to get the logs in, and the "source::" spec cannot distinguish the firewalls. Can I use "host::" instead?
Thanks @richgalloway, the solution worked.
I've been out of touch with core Splunk for some time, so just checking if there are options for the requirement below. An organisation is issuing an RFP for various Big Data products and needs a multi-cloud design for various applications. Applications (and thus data) reside in AWS/Azure/GCP in multiple regions within Europe.
- It doesn't want a lot of egress cost, so aggregating data into the one cloud where Splunk is predominantly installed is out of the question.
- The design is to have 'data nodes' (indexer clusters or data clusters) in each of the cloud providers where the applications/data reside.
- A Search Head Cluster (cross-cloud search) will then be spun up in the main provider (e.g. AWS), which can search ALL these remote 'data nodes'.
Is this design feasible in Splunk? (I understand the Mothership add-on, but my last encounter with it at enterprise scale was not that great.) Looking for something like below with low latency.
Yeah, I got it, but is it possible to edit the HTML page? My goal is to change the URL so that when we click on "Return to Splunk", it takes us to a customized URL. @KendallW
That error can have a few different causes. Check this post https://community.splunk.com/t5/Security/Why-does-Saml-response-not-contain-group-information/m-p/417494/highlight/true#M13278 
"@yuanliu, I see the whole event in a single line when I search for that event and on the indexer I have" — does this conflict with the following? "truncated, the actual number of lines in JSON format is around 959 lines. So is there any limit setting on the search head to analyze the whole event?" Could you elaborate, maybe with some real examples? (Anonymize as needed.)
index="_internal" source="*license_usage.log" type=RolloverSummary earliest=-30d@d latest=now | eval _time = _time - 43200 | bin _time span=1d | stats latest(b) AS b by slave,pool,_time | eval DailyG... See more...
index="_internal" source="*license_usage.log" type=RolloverSummary earliest=-30d@d latest=now | eval _time = _time - 43200 | bin _time span=1d | stats latest(b) AS b by slave,pool,_time | eval DailyGB=round(bytes/1024/1024/1024,2) | timechart sum(DailyGB) as "volume (GB)" span=1d @FrankVl The above gives me aggregated values across all the clusters.  How do I find out the usage per  indexers cluster? I have around 7-8 clusters. Any leads would be appreciated. Thanks
Hi @KendallW, no, I just want to update the SSO HTML file of this homepage.
I have tried the prompt with my email: "To execute the requested action, deny or delegate, click here https://10.250.74.118:8443/approval/14." It requires entering the web UI and finding that particular prompt. If I have 10000 prompts, I cannot quickly find the event related to the email. Is it possible to use the REST API to post a prompt decision to SOAR for a certain event?