All Posts
Hi, thanks for the quick response. I have tried both of the options below.

Option 1:

| eval status=case(like(status, "2%"),"200|201",like(status, "5%"),"503")
| timechart span=1d@d usenull=false useother=f count(status) by status
| eval status=round(status/1000000,2)."M"

Option 2:

| eval status=if(match(status, "20[0-1]"), "success(200 and 201)", status)
| eval status=case(like(status, "2%"),"200|201",like(status, "5%"),"503")
| timechart span=1d@d usenull=false useother=f count(status) by status
| eval count=round(count/1000000,2)."M"

But in my graph I don't see any difference: I still see the large number instead of the shortened number with "M" appended. Below is the output, which still shows the large number.
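One likely reason neither option changes the chart: after | timechart ... count(status) by status, the result columns are named after the status values themselves ("200|201" and "503"), so eval status=round(...) and eval count=round(...) never match an existing field. A minimal sketch of one way to round every series column, using the field names produced above:

| eval status=case(like(status, "2%"),"200|201",like(status, "5%"),"503")
| timechart span=1d@d usenull=false useother=f count(status) by status
| foreach "200|201" "503" [ eval <<FIELD>> = round('<<FIELD>>'/1000000, 2)."M" ]

Note that appending "M" turns the values into strings, which a table will display fine but most chart visualizations will not plot; charts generally need the values to stay numeric.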
It depends on what this number is supposed to represent
I would like to create a scheduled search sending a multi-line Slack notification via the Splunk API. I can create the search, there's no problem. The Slack notification also works, but it is limited to a single line, and I would like to split the notification across multiple lines. I am using the "Slack Notification Alert" app and have tried a few sequences like "\n", "\r", "<br />", and "\", and none of them worked. It seems that all of these are escaped, and the Slack message is still a one-liner like "test\ntest" instead of "test" and "test" on separate lines. Of course I can use a browser to go to the Splunk web UI and change it there, but we need to do this at scale, and changing it manually instead of via the API is not efficient at all. Please help, thanks a lot!
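One possibility, sketched here under the assumption that the app reads the message from the action.slack.param.message setting of the saved search (the endpoint, credentials, and search name below are all placeholders): send a literal newline byte through the REST API instead of the two characters backslash-n. With curl, a shell $'...' string plus --data-urlencode does this (the newline goes over the wire URL-encoded as %0A):

curl -k -u admin:changeme \
  "https://localhost:8089/servicesNS/nobody/search/saved/searches/my_alert" \
  --data-urlencode $'action.slack.param.message=line one\nline two'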
Thank you! The first appendpipe achieved the desired objective! The size constraint should not be a problem because I had all the unixtime values snapped to the month with @mon, so there are only 300 rows in this table. The way to explain this odd situation is that each day we get a data dump of the population, but the field values may change by the day. The issue is that Splunk has a 90-day data retention policy for our events, so basing events purely on _time only goes back 90 days. BUT, in our events, there are additional unixtime fields (two, to be exact) that go back much further than 90 days, and we needed to use these to provide a historical month-by-month view (hence snapping the unixtime with @mon). Total_A is the total sum of the population over time based on Unixtime_A, and Total_B is a conditional sum of the population where a field met a condition, with Unixtime_B containing the time this condition was first met. That's why I wanted Total_A and Total_B to be separate, but Unixtime_A and Unixtime_B could be appended together. To put some context to it, Total_A is the total vulnerabilities population, regardless of whether it was fixed or active, based on Unixtime_A being when it was first detected. Total_B is the total fixed vulnerabilities population, based on Unixtime_B being when it was fixed.
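A sketch of the month-snapping idea described above, using the field names from this post (the aggregation is illustrative, not the actual query):

| eval Month_A = strftime(relative_time(Unixtime_A, "@mon"), "%Y-%m")
| stats count AS Total_A BY Month_A

relative_time(Unixtime_A, "@mon") snaps each epoch value to the start of its month, so even a multi-year span collapses into at most a few hundred rows.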
Hello, yes, sorry, I meant deploymentclient.conf. I didn't configure the HF as a client at all. All I did was point the clients towards the HF and turn forwarding on in the HF as well.
Apologies for all the parentheses, I was just trying to keep things straight in my head. There's definitely a better way to frame the query. I tried what you suggested with if(id.resp_h="front end",resp_bytes,0), but even simplifying the expression to filter on one IP address at a time gives an error. Trying to use it like this:

index="zeek" source="conn.log" ((id.orig_h IN `front end`) AND NOT (id.resp_h IN `backend`)) OR ((id.resp_h IN `front end`) AND NOT (id.orig_h IN `backend`))
| fields orig_bytes, resp_bytes
| eval terabytes=((if(id.resp_h=192.168.0.1,resp_bytes,0))+(if(id.orig_h=192.168.0.1,orig_bytes,0)))/1024/1024/1024/1024
| stats sum(terabytes)

I just get an error back from Splunk:

Error in EvalCommand: the number 192.168.0.1 is invalid
Can I do this by source? And can different sources take different props.conf settings?
After attending a Splunk 9.2 webinar yesterday (3/28/24), I pulled a fresh Docker container down using the "latest" tag and found that I had v9.0.9 rather than v9.2.1. Is it possible that this is a recurrence of the build issue mentioned in this old post: https://community.splunk.com/t5/Deployment-Architecture/Why-is-Docker-latest-not-on-most-recent-version/td-p/600958 ?
How do I check my resources, please? Up until 2 days ago my Splunk had been operating well.
So here is my understanding and the way that I've got our on-prem instance configured.

Hot buckets are stored on a local flash array. When a bucket closes, Splunk keeps the closed bucket on the flash array and writes a copy to S3 storage. The S3 copy is considered the 'master copy'. I try not to use the term 'warm bucket', but instead use 'cached bucket'. All searches are performed on either hot or cached buckets on the local flash array. Cached buckets are eligible for eviction from local storage by the cache manager. So if your search needs a bucket that is not on local storage, the cache manager will evict eligible cached buckets, retrieve the needed buckets from S3 storage, and then perform the search.

frozenTimePeriodInSecs defines our overall retention time. We use hotlist_recency_secs to define when a cached bucket becomes eligible for eviction; that is, buckets younger than hotlist_recency_secs are not eligible for eviction. Our statistics show that probably 90% of the queries have a time span of 7 days or less (search gosplunk.com for the query). Thus, by setting hotlist_recency_secs to 14 days, we ensure that the buckets those searches need are on local, searchable storage without having to reach out to the S3 storage (which is slower).

One last thing. We need 1 year of searchable retention; however, we also need to keep 30 months of total retention. To accomplish this, I use ingest actions to write to the S3 storage. Ingest actions write the events in compressed JSON format by year, month, day, and sourcetype.

Hope this helps.
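An indexes.conf sketch of the settings described above (the index name, volume name, and exact values are illustrative, not the poster's actual configuration):

[myindex]
remotePath = volume:remote_store/$_index_name
# ~1 year of searchable retention
frozenTimePeriodInSecs = 31536000
# buckets newer than 14 days are not eligible for cache eviction
hotlist_recency_secs = 1209600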
On my Splunk instance, while using CyberChef for Splunk, I encounter a message that the last build was 2 years ago. I checked Splunkbase and apps.splunk.com, which only have the latest version from over two years ago. Any suggestions on how I can get this app upgraded, or am I just kinda stuck where I am for now until they come up with an upgrade on Splunkbase?
Hi Experts, I have a list of dates in a field called my_date, like below:

45123
45127
45130

How can I convert these? Thank you!
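Assuming these are Excel-style serial dates (days counted from 1899-12-30, so serial 25569 is 1970-01-01), a minimal conversion sketch: subtract 25569 to get days since the Unix epoch, multiply by 86400 seconds per day, and format the result:

| eval my_date_readable = strftime((my_date - 25569) * 86400, "%Y-%m-%d")

Under that assumption, 45123 comes out to a date in mid-2023.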
In an ideal world, there would be a Checkmarx app downloadable from Splunkbase that contains connectors or API calls for Checkmarx to get logs into Splunk. Unfortunately there is no app for Checkmarx, so you'll have to identify the logs you would like to index from Checkmarx, then find a way to get those logs into Splunk. I am not familiar with Checkmarx but if it has a regular "log export" setting, like a syslog output, or a webhook integration, then it could be configured to push its logs into Splunk as they are generated. Otherwise, you will have to identify the Checkmarx APIs that get the information you are looking for, then you can configure Splunk to make regular HTTPS requests to those APIs and index the responses. 
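As a rough sketch of that last approach (every endpoint, hostname, and token below is a placeholder; the real endpoints would come from the Checkmarx API documentation), a script run on a schedule could pull from the API and push the response to a Splunk HTTP Event Collector:

# hypothetical poll-and-forward: pull data from a Checkmarx REST endpoint,
# then push it to Splunk HEC; run from cron or another scheduler
SCANS=$(curl -s -H "Authorization: Bearer $CX_TOKEN" \
  "https://checkmarx.example.com/cxrestapi/sast/scans")
curl -s "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk $HEC_TOKEN" \
  -d "{\"sourcetype\": \"checkmarx:scans\", \"event\": $SCANS}"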
Hi @hfaz , not deployment.conf but the deploymentclient.conf file! In other words, check whether, by mistake, you also configured the HF as a client. Ciao. Giuseppe
Hi PaulPanther, my Splunk version is 9.0.3. I tried the method in that link, but it still couldn't solve the issue.
Hello, thanks for your answer. I don't have a deployment.conf file on the HF, only on the clients. The problem is that I need to turn indexing on in the HF in order to finally get the panel showing in the HF's Forwarder Management. Isn't there another solution?
Yes, what you're describing is possible, and it's a common approach to collect logs from devices that can't forward logs directly to Splunk. Here's a high-level overview of the steps involved:

1. Configure printers to send logs to the print server: You'll need to configure your printers to send their logs to a specific location on the print server. This might involve setting up syslog or other logging configurations on the printers themselves to point to the print server's IP address and designate a specific directory for log files.

2. Set up a log forwarder on the print server: On the print server, you'll need to set up a log forwarder to monitor the directory where the printers are sending their logs. This can be done using the Splunk Universal Forwarder or any other log forwarding mechanism suitable for your environment (like syslog-ng).

3. Configure the Splunk forwarder to monitor the log directory: Once the print server is receiving logs from the printers, configure the Splunk forwarder on the print server to monitor the directory where the logs are being received. This involves adding a new monitor stanza in the inputs.conf file of the Splunk forwarder (see the sketch after this post).

4. Verify and test the configuration: After configuring everything, verify that logs are being received by the print server from the printers, and that the Splunk forwarder on the print server is successfully forwarding those logs to your Splunk indexer or another forwarder.

In a nutshell, the idea is to have everything available and monitored in one place, instead of onboarding and installing a TA individually on each host.

Please accept the solution and hit Karma, if this helps!
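For step 3, a minimal inputs.conf sketch for the forwarder on the print server (the path, sourcetype, and index are placeholders for your environment):

[monitor:///var/log/printers/]
sourcetype = printer:syslog
index = print_logs
disabled = false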
Hello @aydinmo This could be because of Data Model Acceleration Enforcement. What it does is enforce the default behaviour even if you turn DMA on or off. Can you please check the configuration from Settings -> Data Model Acceleration Enforcement Settings and enable/disable the default behaviour as required. [Screenshot of the Data Model Acceleration Enforcement settings page omitted.] Here is the Splunk doc for your reference: https://docs.splunk.com/Documentation/ES/7.3.0/Install/Datamodels#Data_model_acceleration_enforcement Please accept the solution and hit Karma, if this helps!
Hi @sle , if you use earliest and/or latest in your main search, those values override the values in the Time Picker, which then becomes irrelevant. Ciao. Giuseppe
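For example, a search like this always covers the last 24 hours, no matter what the Time Picker is set to:

index=main earliest=-24h latest=now | stats count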
Hi @taijusoup64 , let me understand: you want to calculate bytes only when id.orig_h="front end" AND id.resp_h="front end", is this correct? In this case, add the condition to the eval statement:

index="zeek" source="conn.log" ((id.orig_h IN `front end`) AND NOT (id.resp_h IN `backend`)) OR ((id.resp_h IN `front end`) AND NOT (id.orig_h IN `backend`))
| fields orig_bytes, resp_bytes
| eval terabytes=(if('id.resp_h'="front end",resp_bytes,0)+if('id.orig_h'="front end",orig_bytes,0))/1024/1024/1024/1024
| stats sum(terabytes)

Ciao. Giuseppe

Why did you use all those parentheses? Ciao. Giuseppe
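One detail worth noting about the error in the other post ("Error in EvalCommand: the number 192.168.0.1 is invalid"): inside eval, string values must be double-quoted and field names containing dots must be single-quoted, otherwise the dot is read as the concatenation operator and the bare IP as a malformed number. A minimal sketch of the single-address version:

| eval terabytes=(if('id.resp_h'="192.168.0.1", resp_bytes, 0) + if('id.orig_h'="192.168.0.1", orig_bytes, 0)) / pow(1024,4)
| stats sum(terabytes) AS terabytes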