All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello,

Based on this Splunk query:

index=* AND appid=127881 AND message="*|NGINX|*" AND cluster != null AND namespace != null | eval server = (namespace + "@" + cluster) | timechart span=1d count by server

Because the logs are only kept for one month, and in the most recent month logs exist only on server 127881-p@23p, the query result shows only one column: 127881-p@23p.

May I ask how to make the result show three columns: 127881-p@23p, 127881-p@24p, and 127881-p@25p? Since there are no recent logs for 24p and 25p, their values should be 0.

Thanks a lot!
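One way to force all three columns to appear even when two servers have no recent events is to create the missing columns after timechart: eval with coalesce leaves an existing column alone and fills a missing one with 0. A sketch, assuming the server names are known in advance:

```spl
index=* appid=127881 message="*|NGINX|*" cluster!=null namespace!=null
| eval server = namespace . "@" . cluster
| timechart span=1d count by server
| eval "127881-p@24p" = coalesce('127881-p@24p', 0)
| eval "127881-p@25p" = coalesce('127881-p@25p', 0)
```

The single quotes on the right-hand side are needed because the field names contain special characters; if a column already exists for a server, the coalesce is a no-op.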
Hi all, I have written a macro to get a field; it has 3 joins. When I use the macro in a dashboard, in a base search, it does not work properly and returns far fewer results. But when I use the macro in the search bar it gives correct results. Does anyone know how I can solve this?
Hello, our Splunk web service has a domain, for example: https://splunksh.com. The problem is that anyone can access https://splunksh.com/config without logging in. Although the page doesn't contain any sensitive data, our Cyber Security team deems it a vulnerability that needs to be fixed. I want to know how to either disable that URL or redirect it to the login page. Any help would be very appreciated.
Hello everyone, I have collected some firewall traffic data: two firewalls (fw1/fw2), each with two interfaces (ethernet1/1 and ethernet1/2), collecting rxbytes and txbytes every 5 minutes. The raw data is shown below:

>>>
{"timestamp": 1726668551, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59947791867743, "txbytes": 37019023811192}
{"timestamp": 1726668551, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63755935850903, "txbytes": 32252936430552}
{"timestamp": 1726668551, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726668551, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726668851, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59948210937804, "txbytes": 37019791801583}
{"timestamp": 1726668851, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63755965708078, "txbytes": 32253021060643}
{"timestamp": 1726668851, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726668851, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669151, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59948636904106, "txbytes": 37020560028933}
{"timestamp": 1726669151, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756002542165, "txbytes": 32253111011234}
{"timestamp": 1726669151, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669151, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669451, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59949094737896, "txbytes": 37021330717977}
{"timestamp": 1726669451, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756101313559, "txbytes": 32253199085252}
{"timestamp": 1726669451, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669451, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669752, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59949550987330, "txbytes": 37022105630147}
{"timestamp": 1726669752, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756167141302, "txbytes": 32253286546113}
{"timestamp": 1726669752, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669752, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726670052, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59949968397016, "txbytes": 37022870539739}
{"timestamp": 1726670052, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756401499253, "txbytes": 32253380028970}
{"timestamp": 1726670052, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726670052, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
<<<

Now I need to create one chart showing the value of "rxbytes" over time, with 4 series:
(series 1) fw1, ethernet1/1
(series 2) fw1, ethernet1/2
(series 3) fw2, ethernet1/1
(series 4) fw2, ethernet1/2

But I'm having trouble composing the SPL statement for this. Can you please help? Thank you in advance!
I want to count user_ids that appear more than once per month (i.e., a user that has used the product multiple times). I've tried a few variations such as:

search XXX | dedup XXX | stats count by user_id | where count > 1

but can't seem to get it to work. I'm hoping to display the count as a single number as well as timechart it, so I can show the number over the last X months. Any suggestions? It feels like it should have been easier than it has been!
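A sketch of one way to count per-month repeat users: bin events by month first, count each user's events within the month, keep only users with more than one, then count distinct users per month:

```spl
search XXX
| bin _time span=1mon
| stats count by _time, user_id
| where count > 1
| stats dc(user_id) as repeat_users by _time
```

For a single number over one time range, drop the final "by _time" (| stats dc(user_id) as repeat_users).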
Hello, I have a requirement to show one of my Splunk Cloud dashboards embedded in an on-prem SharePoint page. I'm trying to use an iframe for that purpose, but I get a "Connection Refused" error. Any ideas, or has anyone tried this?
So there is no way to customize these letters/abbreviations?
Let's add some additional considerations to the mix.
1. The raw number of events is one thing, but their size also matters. Sending 1,000 short syslog-received messages is a completely different thing from sending 1,000 multi-kilobyte stack dumps from a Java app.
2. Technically, at some point you will hit some limit (after all, server memory doesn't grow on trees ;-)). But sending tens of ISO images within a single batch request probably isn't what you're aiming at.
3. And finally, even if you're not using acks, will your source be able to resend the event batch from a given point should an error happen in the middle of the batch, when only some events were accepted?
That's understandable. Your files consist mostly of a relatively constant part repeated across all files (the header and some relatively constant fields), so Splunk keeps guessing that it's all the same file. If the filenames are unique and the files are not rotated in any way, you can use crcSalt=<SOURCE> (that's actually one of the rare cases where it makes sense). Otherwise, raise initCrcLength so that the checksum window covers the variable parts of the event. As a side note, the event is very verbose and could use some serious editing on ingest to save on license (you don't need the majority of the raw data). An additional question is whether there should be any event breaking done within a single file.
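For reference, both knobs live in inputs.conf on the instance doing the file monitoring; the path and sourcetype below are hypothetical:

```ini
# inputs.conf (hypothetical monitor stanza)
[monitor:///var/log/myreports/*.log]
sourcetype = my_report
# only if filenames are unique and files are never rotated/renamed:
crcSalt = <SOURCE>
# alternatively, widen the checksum window past the constant header
# (the default is 256 bytes):
# initCrcLength = 1024
```

Pick one approach or the other: crcSalt=<SOURCE> folds the filename into the checksum, while initCrcLength makes the checksum itself cover more of the file content.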
I've never done this myself (usually you grow from a stand-alone instance to a clustered environment), but there is no real reason why one of your indexers shouldn't work as a stand-alone machine. Of course, you know how to remove one indexer from the cluster (I hope you don't have rf=sf=1). If you have rf=2, sf=1 and a relatively symmetrical distribution of primaries, you might need extra storage, since Splunk will have to rebuild index files from raw data on the remaining indexer. If you have rf=sf=2, you'll just take one indexer down and that's it. One caveat: since your rf/sf will not be met with just one indexer, your cluster will be searchable but not complete, since you'll always be missing the other indexer.
Ingesting via an S3 bucket is actually a fairly unusual scenario. Start easy: deploy a UF on a Windows box and read its event log channels. Then try ingesting data with file monitor inputs. Then you can try installing some apps with modular inputs and configuring them. And frankly, adding data is not really much of a cybersecurity task; it's more of an admin chore.
We can only see that the server is throwing a 500 error; we can't tell why. There should be something more in the logs. Check _internal to see what's going on.
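As a starting point, assuming splunkd's standard field extractions, this shows which components are logging errors so you can drill into the noisiest one:

```spl
index=_internal sourcetype=splunkd log_level=ERROR
| stats count by component
```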
I'm looking for a tool for tracking changes to knowledge objects in a single app only: not everything under $SPLUNK_HOME/etc, just $SPLUNK_HOME/etc/apps/my_app_to_track. Which tool can support that? Thanks!
Neither the Inputs page nor the Configuration page is working on the Palo Alto Networks Add-on. I am using standalone Splunk Enterprise with an NFR 50 GB license. I've never had problems with this TA before, but I recently rebuilt my test environment to migrate from CentOS to RHEL, so I reinstalled Splunk with the latest version and all apps on their latest versions as well. Here are the errors: What am I doing wrong to get these errors?
Are there any plans to support this app on Splunk Cloud?
Hello! I am new to Splunk and AWS. I just set up Splunk on a Linux server in AWS. I now want to ingest sample data into AWS and forward it so I can view it in Splunk. I know I need to use the universal forwarder, but how do I actually go about ingesting the data into an S3 bucket to then be forwarded to Splunk? Yes, I know I can ingest sample data straight into Splunk, but I am trying to get real-world experience to get a job in cybersecurity!
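For the simpler UF-to-Splunk path (skipping S3), a sketch with placeholder hostnames and paths: first enable receiving on the Splunk server (Settings > Forwarding and receiving, or splunk enable listen 9997 from the CLI), then configure the forwarder:

```ini
# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = <splunk-server-ip>:9997

# inputs.conf on the universal forwarder
[monitor:///var/log/syslog]
sourcetype = syslog
```

After restarting the forwarder, the monitored file's events should appear under that sourcetype on the Splunk server.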
A simple way to do it is to remove one indexer from the cluster and run the cluster with a single indexer. You will still need a CM, but you will save storage and 3 servers (2 SHs and 1 indexer). Use the offline command to take down one indexer (maintenance mode is not needed) and the CM will ensure all data exists on the remaining indexer (which it should already):

splunk offline --enforce-counts
Hi, we are decommissioning our Splunk infrastructure; our company was purchased and the new management wants to free up resources :(. We have 3 search heads (stand-alone) + 2 indexers (clustered). They asked me to break the indexer cluster to free storage, CPU, and memory; I've found docs about removing nodes while keeping the cluster. We want to keep just one search head (the one with the license master) and one indexer. Is there documentation on how to "break" the cluster and keep just one indexer in stand-alone mode? (We need to keep the data for auditing reasons.) I know I can just put one indexer in maintenance mode and power it off, but that procedure is intended to reboot or replace the "faulty" indexer later, not to keep it down forever and ever. Regards.
I have ClamAV running on all my Linux hosts (universal forwarders), and all logs seem to be fine except the ClamAV logs. The ClamAV scan report has an unusual log format (see below), and I need help ingesting it. Splunk (splunkd.log) shows an error when I try to ingest it. I think I need to set up a props.conf, but I am not sure how to go about it. This is an air-gapped system, just FYI.

splunkd.log:

ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/var/log/audit/clamav_scan_20240916_111846.log). Last time we saw this, filename was different. You may wish to use larger initCrcLen for this sourcetype or a CRC salt on this source.

The ClamAV scan generates a log file as shown below:

-----------SCAN SUMMARY--------------
Known Viruses: xxxxxx
Engine Version: x.xx.x
Scanned Directories: xxx
Scanned Files: xxxxx
Infected Files: x
Data Scanned: xxxxMB
Data Read: xxxxMB
Time:
Start Date: 2024:09:16 14:46:58
End Date: 2024:09:16 16:33:06
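Going by the error text, one sketch of a fix: the TailReader error is about file identification, so inputs.conf on the UF needs a CRC salt (the reports share a constant header), while line breaking and timestamp parsing belong in props.conf on the indexer, since UFs forward unparsed data. The sourcetype name and values below are assumptions:

```ini
# inputs.conf on the universal forwarder
[monitor:///var/log/audit/clamav_scan_*.log]
sourcetype = clamav:scan
crcSalt = <SOURCE>

# props.conf on the indexer
[clamav:scan]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = -+SCAN SUMMARY-+
TIME_PREFIX = Start Date:\s
TIME_FORMAT = %Y:%m:%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```

This treats each scan summary as a single multi-line event and takes the event time from the "Start Date" line; adjust if you'd rather have each summary line as its own event.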
Thanks for that info. I have added the lookup definitions, but the results are still coming out the same.