All Posts


Our deployment has indexers located in the main data center and multiple branches. We plan to deploy intermediate forwarders and Universal Forwarder (UF) agents in our remote branches to collect logs from security devices such as firewalls and load balancers. What is the recommended bandwidth between the intermediate forwarders and the indexers? What is the recommended bandwidth between the UF agents and the indexers?
Hi, I have a requirement to create a software stack dashboard in Splunk that shows all the details seen in Task Manager, like shown below. We have both the Windows and Unix add-ons installed and are getting the logs. Can someone please help me create a dashboard that shows all these details?
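A minimal sketch of a starting search, assuming the Splunk Add-on for Windows is collecting process counters into the Perfmon:Process sourcetype (the index name, counter name, and field names below are assumptions to adapt to your environment):

index=windows sourcetype=Perfmon:Process counter="% Processor Time" instance!=_Total instance!=Idle
```latest CPU reading per process per host; Value holds the counter sample```
| stats latest(Value) as cpu_pct by host instance
| sort - cpu_pct

A similar search against the Splunk Add-on for Unix and Linux data (e.g. the ps sourcetype) could feed a second panel for the Unix hosts.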
Thanks! Do you perhaps know how I can count a user only once per month per app?

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access
| table user, app, date_month
| dedup user, app, date_month
| chart count by app date_month
| sort app 0

This gives me a huge total for August but takes out the events for the other months.
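A distinct count avoids the dedup step entirely; a minimal sketch, assuming date_month is populated on every event:

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access
```dc() counts each user at most once per app/month cell```
| chart dc(user) over app by date_month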
Hi, we maintain a lookup table which contains a list of account_id values and some other info, as shown below.

account_id   account_owner   type
12345        David           prod
123456       John            non-prod
45678        Nat             non-prod

In our query, we use a lookup command to enrich the data using this lookup table. We match by account_id and get the corresponding owner and type as follows:

| lookup accounts.csv account_id OUTPUT account_owner type

In some events (depending on the source), the account_id value contains a leading 0, but in our lookup table the account_id column does not have a leading 0. Basically, some events will have account_id=12345 and some might have account_id=012345; they are both the same account. The lookup command displays results when there is an exact matching account_id in the events, but fails when there is that extra 0 at the beginning. How can I tune the lookup command to search the lookup table for both conditions, with and without the leading 0 in the account_id field, so that if either matches it produces the corresponding results? I hope I am clear. I am unable to come up with a regex for this.
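One common approach is to normalize the field before the lookup rather than tuning the lookup itself; a minimal sketch, assuming account_id values in the lookup file never legitimately begin with 0:

<your base search>
```strip leading zeros so 012345 matches the 12345 row```
| eval account_id=ltrim(account_id, "0")
| lookup accounts.csv account_id OUTPUT account_owner type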
Note that if you have any days with no results, you will not get a datapoint for that day for that source, which will affect the average. You can probably resolve that if it's an issue.
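One way to avoid the missing datapoints is to let timechart emit every daily bucket and fill the empty cells; a minimal sketch against the original search, assuming a zero count is the right reading for a day with no events:

| tstats count as log_count where index=myindex AND hostname="colla" AND source=* earliest=-7d@d latest=now by _time span=1d, source
| timechart span=1d sum(log_count) by source
```timechart emits all buckets in range; replace empty cells with 0```
| fillnull value=0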
Instead of using timechart, you can use stats and bin by _time, e.g.

| tstats count as log_count where index=myindex AND hostname="colla" AND source=* earliest=-7d@d latest=now by _time span=1d, source
| stats sum(log_count) as sum_log by _time source
| eventstats avg(sum_log) as avg_sum_log by source

and then in your trellis give yourself an independent scale. You seem to need the tstats AND stats to give yourself a trellis-by-source option.
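If you also want the average drawn as its own line in each panel, one option is to turn both fields into per-source series; a minimal sketch continuing the search above (max() simply passes through the single value per _time/source pair):

| timechart span=1d max(sum_log) as daily_total, max(avg_sum_log) as average by source

With the trellis split by source, each panel should then show the daily line plus a flat average line.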
Hi @koshyk, OK, as I said, you have to create a cluster in each environment. Then each Search Head Cluster must be configured as a search head of each Indexer Cluster. For more details see https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Configuremulti-clustersearch Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
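For reference, multi-cluster search is configured in server.conf on the search head; a minimal sketch following that doc page (labels, hostnames, and keys below are placeholders; note that older releases use master_uri instead of manager_uri):

# server.conf on the search head
[clustering]
mode = searchhead
manager_uri = clustermanager:cloud1, clustermanager:cloud2

[clustermanager:cloud1]
manager_uri = https://cm-cloud1.example.com:8089
pass4SymmKey = changeme

[clustermanager:cloud2]
manager_uri = https://cm-cloud2.example.com:8089
pass4SymmKey = changeme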
Agree, Giuseppe. I was asking in case anyone in the community has done something similar. >> if yes you have to create a multisite cluster, one for each different Cloud, configuring replications between them and configuring Search Heads, in each Cloud, to search in all the clusters. Creating the SHC and replication is not the problem; the key is that end users have to search from a single Search Head Cluster, and it should search ALL the clusters in the multi-cloud setup. (This could be an SHC on top of the SHCs in each cloud, or preferably pointing directly at the indexers in each cloud if feasible.) The organisation doesn't need data replicated between clouds, but we can create clusters within each cloud provider for the replication factor.
Try this

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access
| chart count by app date_month
Hi @JandrevdM, what's the issue in your search? It seems to be correct, even if I'd simplify it:

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access
| stats count by date_month app
| rename count as "Number of Users"
| table date_month app "Number of Users"

If you have a lot of data to analyze, you could schedule this search, e.g. every night with the events of the day, saving the results in a summary index and searching on the summary index. Ciao. Giuseppe
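A minimal sketch of that summary-index approach, assuming a nightly schedule over the previous day and a destination index named summary (both are placeholders):

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access earliest=-1d@d latest=@d
| stats count by date_month app
```write the nightly rollup into the summary index```
| collect index=summary

The report then runs against index=summary and sums the precomputed counts instead of rescanning the raw events.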
Hi @koshyk, at first, this isn't a question for the Community, but for a Certified Splunk Architect or a Splunk Professional Services specialist. Anyway, yes, it's possible, but you should answer one question: do you want to replicate data across the clouds or not? If yes, you have to create a multisite cluster, one for each different cloud, configuring replication between them and configuring search heads, in each cloud, to search all the clusters. If not, you have to create a cluster in each cloud and only configure search heads, in each cloud, to search all the clusters. Anyway, this project requires the engagement of a specialist to analyze the requirements and design the solution. Ciao. Giuseppe
Hello everyone, I have created a dashboard that shows total log volumes for different sources across 7 days. I am using a line chart and trellis. As shown in the pic, I want to add the median/average value of the logs as a horizontal red line. Is there a way to achieve it? The final aim is to be able to observe the pattern and the median/avg log volume of a certain week, which ultimately helps to define a baseline of log volume for each source. Below is the SPL I am using:

| tstats count as log_count where index=myindex AND hostname="colla" AND source=* earliest=-7d@d latest=now by _time, source
| timechart span=1d sum(log_count) by source

Any suggestions would be highly appreciated. Thanks
Hi, I am trying to get a list of all users that hit our AI rule and see whether this increases or decreases over a timespan of 90 days. I want to see the application they use, with the last three months displayed as columns holding a count of the number of users. Example below:

Applications   June (Month1)   July (Month2)   August (Month3)
chatGPT        213             233             512

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access
| table user, app, date_month
```| dedup user, app, date_month```
| stats count by date_month, app
| sort date_month, app 0
| rename count as "Number of Users"
| table date_month, app, "Number of Users"
Hello @KendallW, can you please help with a follow-up question? In my case, I'm using HEC to get the logs in, so the "source::" spec cannot distinguish the firewalls. Can I use "host::" instead?
Thanks @richgalloway, the solution worked.
I've been out of touch with core Splunk for some time, so just checking if there are options for the below requirement. The organisation is issuing an RFP for various Big Data products and needs:
- a multi-cloud design for various applications; the applications (and thus the data) reside in AWS/Azure/GCP in multiple regions within Europe
- to avoid heavy egress costs, so aggregating data into the one cloud where Splunk is predominantly installed is out of the question
- 'data nodes' (indexer clusters or data clusters) in each of the cloud providers where the applications/data reside
- a Search Head Cluster (cross-cloud search) then spun up in the main provider (e.g. AWS), which can search ALL these remote 'data nodes'
Is this design feasible in Splunk? (I understand the Mothership add-on, but my last encounter with it at enterprise scale was not that great.) Looking for something like the below, with low latency.
Yeah, I got it, but is it possible to edit the HTML page? My goal is to change the URL so that when we click on "return to Splunk", it takes us to a customized URL. @KendallW
That error can have a few different causes. Check this post https://community.splunk.com/t5/Security/Why-does-Saml-response-not-contain-group-information/m-p/417494/highlight/true#M13278 
@yuanliu, I see the whole event in a single line when I search for that event, and on the indexer I have

Does this conflict with the following?

"truncated, the actual number of lines in JSON format is around 959 lines. So is there any limit setting on the search head to analyze the whole event?"

Could you elaborate, maybe with some real examples? (Anonymize as needed.)
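If the event really is being cut at parse time rather than at display time, the usual suspect is the TRUNCATE limit in props.conf on the indexer or heavy forwarder; a minimal sketch, where my_json stands in for the actual sourcetype name:

# props.conf for the sourcetype carrying the large JSON events
[my_json]
# per-event byte limit; the default is 10000, 0 disables truncation
TRUNCATE = 500000
# multi-line events are capped at 256 lines by default
MAX_EVENTS = 2000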