All Posts


How do I check my resources, please? Up until 2 days ago my Splunk had been operating well.
So here is my understanding and the way that I've got our on-prem instance configured. Hot buckets are stored on a local flash array. When a bucket closes, it keeps the closed bucket on the flash drive and writes a copy to the S3 storage. The S3 copy is considered to be the 'master copy'. I try not to use the term 'warm bucket', but instead use 'cached bucket'. All searches are performed on either hot or cached buckets on the local flash array. Cached buckets are eligible for eviction from local storage by the cache manager. So if your search needs a bucket that is not on local storage, it will evict eligible cached buckets, retrieve the buckets from S3 storage and then perform the search.

The frozenTimePeriod defines our overall retention time. We use hotlist_recency_secs to define when a cached bucket becomes eligible for eviction. That is, buckets younger than the hotlist_recency_secs age are not eligible for eviction. Our statistics show that probably 90% of the queries have a time span of 7 days or less (research gosplunk.com for the query). Thus, by setting hotlist_recency_secs to 14 days, we ensure that the searched buckets are on local, searchable storage w/o having to reach out to the S3 storage (which is slower).

One last thing. We need a 1 yr searchable retention. However, we also need to keep 30 months total retention. To accomplish this, I use ingest actions to the S3 storage. Ingest actions write the events in compressed JSON format by year, month, day, and sourcetype. Hope this helps.
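As a sketch of how those settings fit together in a SmartStore setup (the index name, volume name, bucket path, and numeric values below are illustrative assumptions, not this poster's actual configuration):

# indexes.conf (sketch)
[volume:remote_store]
storageType = remote
path = s3://example-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[example_index]
remotePath = volume:remote_store/example_index
homePath = $SPLUNK_DB/example_index/db
coldPath = $SPLUNK_DB/example_index/colddb
thawedPath = $SPLUNK_DB/example_index/thaweddb
# searchable retention of 1 year; the 30-month copy is kept separately via ingest actions, as described above
frozenTimePeriodInSecs = 31536000
# cached buckets newer than 14 days are not eligible for eviction
hotlist_recency_secs = 1209600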
On my Splunk instance, while using CyberChef for Splunk, I see a message that the last build was 2 years ago. I checked Splunkbase and apps.splunk.com, which only have the latest version from over two years ago. Any suggestions on how I can get this app upgraded, or am I just stuck where I am for now until they publish an upgrade on Splunkbase?
Hi Experts, I have a list of dates in a field called my_date, like below:
45123
45127
45130
How can I convert this? Thank you!
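If these are Excel-style serial dates (days since 1899-12-30), which is only an assumption on my part, a conversion sketch in SPL would look like this (the makeresults block just simulates the my_date field from the question):

| makeresults count=3
| streamstats count AS n
| eval my_date=case(n=1, 45123, n=2, 45127, n=3, 45130)
``` serial 25569 corresponds to 1970-01-01, so subtracting it and multiplying by 86400 gives Unix epoch seconds ```
| eval converted=strftime((my_date - 25569) * 86400, "%Y-%m-%d")
| table my_date converted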
In an ideal world, there would be a Checkmarx app downloadable from Splunkbase that contains connectors or API calls for Checkmarx to get logs into Splunk. Unfortunately there is no app for Checkmarx, so you'll have to identify the logs you would like to index from Checkmarx, then find a way to get those logs into Splunk. I am not familiar with Checkmarx but if it has a regular "log export" setting, like a syslog output, or a webhook integration, then it could be configured to push its logs into Splunk as they are generated. Otherwise, you will have to identify the Checkmarx APIs that get the information you are looking for, then you can configure Splunk to make regular HTTPS requests to those APIs and index the responses. 
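As a rough illustration of the syslog option, assuming Checkmarx can be configured to emit syslog (the port, index, and sourcetype below are placeholders, not documented Checkmarx values), a receiving input on a heavy forwarder could look like this:

# inputs.conf (sketch)
[udp://5514]
index = checkmarx
sourcetype = checkmarx:syslog
connection_host = ip

In practice, a dedicated syslog server (for example syslog-ng) writing to files that a Universal Forwarder monitors tends to be more robust than sending syslog straight to Splunk.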
Hi @hfaz , not the deployment.conf file but the deploymentclient.conf file! In other words, check whether, by mistake, you also configured the HF as a deployment client. Ciao. Giuseppe
Hi PaulPanther, My Splunk version is 9.0.3. I tried the method in that link, but it still didn't solve the issue.
Hello, thanks for your answer. I don't have a deploymentclient.conf file on the HF, only on the clients. The problem is that I need to turn on indexing on the HF in order to finally get the panel showing in the HF's Forwarder Management. Isn't there another solution?
Yes, what you're describing is possible, and it's a common approach for collecting logs from devices that can't directly forward logs to Splunk. Here's a high-level overview of the steps involved:

1. Configure the printers to send logs to the print server: Configure your printers to send their logs to a specific location on the print server. This might involve setting up syslog or other logging configurations on the printers themselves to point to the print server's IP address and designate a specific directory for log files.
2. Set up a log forwarder on the print server: On the print server, set up a log forwarder to monitor the directory where the printers are sending their logs. This can be done using the Splunk Universal Forwarder or any other log forwarding mechanism suitable for your environment (like syslog-ng).
3. Configure the Splunk forwarder to monitor the log directory: Once the print server is receiving logs from the printers, configure the Splunk forwarder on the print server to monitor the directory where the logs arrive. This involves adding a new monitor stanza in the forwarder's inputs.conf file (see the sketch after this post).
4. Verify and test the configuration: After configuring everything, verify that the print server is receiving logs from the printers and that the Splunk forwarder on the print server is successfully forwarding those logs to your Splunk indexer or another forwarder.

In a nutshell, the idea is to have everything available and monitored in one place instead of onboarding and installing a TA individually on each host.

Please accept the solution and hit Karma, if this helps!
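A minimal sketch of the monitor stanza from step 3, assuming the printers write their logs under /var/log/printers on the print server (the path, index, and sourcetype are placeholders, not values from the question):

# inputs.conf on the Universal Forwarder running on the print server (sketch)
[monitor:///var/log/printers/*.log]
index = printer_logs
sourcetype = printer:syslog
disabled = false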
Hello @aydinmo This could be because of Data Model Acceleration Enforcement. Even if you turn DMA on or off, enforcement will re-apply the default behaviour. Can you please check the configuration under Settings -> Data Model Acceleration Enforcement Settings and enable/disable the default behaviour as required? Here is the Splunk doc for your reference: https://docs.splunk.com/Documentation/ES/7.3.0/Install/Datamodels#Data_model_acceleration_enforcement Please accept the solution and hit Karma, if this helps!
Hi @sle , if you use earliest and/or latest in your main search, those values override the values you set in the Time Picker, which is then ignored. Ciao. Giuseppe
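For example (the index name is just for illustration), this search always runs over the last 24 hours no matter what the Time Picker says:

index=my_index earliest=-24h@h latest=now
| stats count BY sourcetype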
Hi @taijusoup64 , let me understand: you want to calculate bytes only when id.orig_h="frontend" AND id.resp_h="frontend", is this correct? In that case, add the condition to the eval statement:

index="zeek" source="conn.log" ((id.orig_h IN `front end`) AND NOT (id.resp_h IN `backend`)) OR ((id.resp_h IN `front end`) AND NOT (id.orig_h IN `backend`))
| fields orig_bytes, resp_bytes
| eval terabytes=(if(id.resp_h="front end", resp_bytes, 0) + if(id.orig_h="front end", orig_bytes, 0))/1024/1024/1024/1024
| stats sum(terabytes)

Ciao. Giuseppe

Why did you use all those parentheses? Ciao. Giuseppe
Hi @gauravu_14 , in general, given a lookup containing the list of hosts to monitor, you can use a search like this:

| tstats count WHERE index=* BY host
| append [ | inputlookup your_lookup.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

If you are monitoring some clusters, the lookup should also indicate the cluster membership, something like this:

primary_host   secondary_host
host1          host1bis
host2
host3          host3bis
host4

and run a slightly different search:

| tstats count WHERE index=* BY host
| lookup your_lookup.csv primary_host AS host OUTPUT secondary_host
| lookup your_lookup.csv secondary_host AS host OUTPUT primary_host
| append [ | inputlookup your_lookup.csv | rename primary_host AS host | eval count=0 | fields host count ]
| append [ | inputlookup your_lookup.csv | rename secondary_host AS host | eval count=0 | fields host count ]
| stats sum(count) AS total values(primary_host) AS primary_host values(secondary_host) AS secondary_host BY host
| where total=0 AND NOT (primary_host=* secondary_host=*)

About the indexes related to the hosts that are not sending data, it's more difficult because this search doesn't carry any information about indexes. The only way is to also store in the lookup the indexes each host usually writes to; in that case you can add this information to the stats command:

| tstats count WHERE index=* BY host
| lookup your_lookup.csv primary_host AS host OUTPUT secondary_host indexes
| lookup your_lookup.csv secondary_host AS host OUTPUT primary_host indexes
| append [ | inputlookup your_lookup.csv | rename primary_host AS host | eval count=0 | fields host count ]
| append [ | inputlookup your_lookup.csv | rename secondary_host AS host | eval count=0 | fields host count ]
| stats sum(count) AS total values(primary_host) AS primary_host values(secondary_host) AS secondary_host values(indexes) AS indexes BY host
| where total=0 AND NOT (primary_host=* secondary_host=*)

Ciao. Giuseppe
Hi All, We have DB agents, and the SQL servers are still using TLS 1.1 and 1.0. Can this affect the DB metrics reporting to AppD? Regards Fadil
Hi @mfonisso, what are the resources of your Splunk server? Splunk requires at least 12 CPUs and 12 GB RAM (more if you have ES or ITSI) and a disk with at least 800 IOPS. Ciao. Giuseppe
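If it helps, one way to check CPU and memory from Splunk itself (a sketch; it assumes your role can call the REST endpoint) is:

| rest /services/server/info
| fields splunk_server, numberOfCores, numberOfVirtualCores, physicalMemoryMB, os_name

Disk IOPS isn't reported there, so that part has to be checked at the OS or storage level.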
Hi @Rahul-Sri , this is another question, and it's always better to open a new one, even if it is the next step of your request; that way you'll surely get faster and probably better answers. Anyway, the approach is to use the eval command (not fieldformat) and round the number:

| eval count=round(count/1000000,2)."M"

Please accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe
P.S.: Karma Points are appreciated
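Applied to a timechart split by status, as in the question, the series columns are named after the status values, so a sketch using foreach could look like this (the 2* and 5* wildcards are an assumption based on the 200|201 and 503 series names):

| eval status=case(like(status, "2%"), "200|201", like(status, "5%"), "503")
| timechart span=1d@d usenull=false useother=f count(status) by status
| foreach 2* 5* [ eval '<<FIELD>>'=round('<<FIELD>>'/1000000, 2)."M" ]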
Hi @Ryan.Paredez , Thank you for this. Actually I was having this concern for another account. Regards Fadil
Hi, the above query in my dashboard is displaying large numbers. I want to convert those to shorter numbers with "million" appended. For example, if the value shows 6,000,000 then the result should display 6mil. How can I achieve this? I tried using:

| eval status=case(like(status, "2%"),"200|201",like(status, "5%"),"503")
| timechart span=1d@d usenull=false useother=f count(status) by status
| fieldformat count = count/1000000

But this does not work. Any help is appreciated.
My mistake - I neglected groupby. I know this has come up before (because some veterans here helped me :-)) But I can't find the old answer. (In fact, this delta with groupby question comes up regularly because it's a common use case.) So, here is a shot:

|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
| streamstats window=2 global=false range(Trans) as delta max(Trans) as Trans_max max(_time) as _time by application
| sort application _time
| eval delta = if(Trans_max == Trans, delta, "-" . delta)
| eval pct_delta = delta / Trans * 100
| fields - Trans_max

Here is my full simulation:

| mstats max(_value) as Trans where index=_metrics metric_name = spl.mlog.bucket_metrics.* earliest=-8h@h latest=-4h@h by metric_name span=1h
| rename metric_name as application
``` the above simulates |mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application ```
| streamstats window=2 global=false range(Trans) as delta max(Trans) as Trans_max max(_time) as _time by application
| sort application _time
| eval delta = if(Trans_max == Trans, delta, "-" . delta)
| eval pct_delta = delta / Trans * 100
| fields - Trans_max

My output is:

_time             application                                   Trans       delta       pct_delta
2024-03-28 12:00  spl.mlog.bucket_metrics.created               0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.created               0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.created               0.000000    0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.created               0.000000    0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.created_replicas      0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.created_replicas      0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.created_replicas      0.000000    0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.created_replicas      0.000000    0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.current_hot           12.000000   0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.current_hot           12.000000   0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.current_hot           12.000000   0.000000    0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.current_hot           12.000000   0.000000    0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.current_hot_replicas  0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.current_hot_replicas  0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.current_hot_replicas  0.000000    0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.current_hot_replicas  0.000000    0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.current_total         215.000000  0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.current_total         215.000000  0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.current_total         214.000000  -1.000000   -0.4672897
2024-03-28 15:00  spl.mlog.bucket_metrics.current_total         214.000000  0.000000    0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.frozen                0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.frozen                0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.frozen                1.000000    1.000000    100.0000
2024-03-28 15:00  spl.mlog.bucket_metrics.frozen                0.000000    -1.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.rolled                0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.rolled                0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.rolled                0.000000    0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.rolled                0.000000    0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.total_removed         0.000000    0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.total_removed         0.000000    0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.total_removed         1.000000    1.000000    100.0000
2024-03-28 15:00  spl.mlog.bucket_metrics.total_removed         0.000000    -1.000000

Obviously my results have lots of nulls because lots of my "Trans" values are zero. But you get the idea.
Reached out to their support team at education@splunk.com and they resolved it for me.