Actually, ingesting via an S3 bucket is a fairly unusual scenario. Start easy - deploy a UF on a Windows box and read its event log channels. Then try ingesting data with file monitor inputs. Then you can try installing some apps with modular inputs and configuring them. And actually, adding data is not really much of a cybersecurity task. It's more of an admin chore.
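If it helps, a minimal inputs.conf sketch of those first two steps on the UF (the monitor path, sourcetype and index names are assumptions - adjust to your environment):
inputs.conf (on the Windows UF):
[WinEventLog://Security]
disabled = 0
index = wineventlog

[WinEventLog://Application]
disabled = 0
index = wineventlog

[monitor://C:\myapp\logs\app.log]
disabled = 0
sourcetype = myapp_log
index = main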
We can only see that the server is throwing a 500 error. We can't tell why. There should be something more in the logs. Check out _internal to see what's going on.
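For example, something along these lines (a rough sketch - narrow the time range and drill into the noisy component once you see it):
index=_internal sourcetype=splunkd log_level=ERROR
| stats count by component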
I'm looking for a tool for tracking changes of knowledge objects only in a single app - not everything under $SPLUNK_HOME/etc, just $SPLUNK_HOME/etc/apps/my_app_to_track. Which tool can support that? Thanks!
Both the Inputs and Configuration pages are not working on the Palo Alto Networks Add-on. I am using standalone Splunk Enterprise with an NFR 50GB license. I've never had problems with this TA before, but I recently redid my test environment to migrate from CentOS to RHEL, so I reinstalled Splunk on the latest version, with all apps on their latest versions as well. Here are the errors: What am I doing wrong to get these errors?
Hello! I am new to Splunk and AWS. I just set up Splunk on a Linux server in AWS. I now want to ingest sample data into AWS and forward it so I can view it in Splunk. I know I need to use the universal forwarder, but how do I actually go about ingesting the data into an S3 bucket to then be forwarded to Splunk? Yes, I know I can ingest sample data straight into Splunk, but I am trying to get real-world experience to get a job in cybersecurity!
A simple way to do it is to remove one indexer from the cluster and run the cluster with a single indexer. You will still need a CM, but you will save storage and three servers (2 SH and 1 Idx). Use the offline command to take down one indexer (maintenance mode not needed) and the CM will ensure all data exists on the remaining indexer (which it should already).
splunk offline --enforce-counts
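A rough sketch of the sequence (assumption: commands run from $SPLUNK_HOME/bin on the respective hosts):
On the indexer being removed:
./splunk offline --enforce-counts
On the cluster manager, to confirm the remaining peer is healthy and bucket copies meet their counts:
./splunk show cluster-status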
Hi, we are decommissioning our Splunk infra; our company was purchased and the new management want to free resources :(. We have 3 search heads (standalone) + 2 indexers (clustered). They ask me to break the indexer cluster to free storage, CPU and memory, and I've found docs about removing nodes but keeping the cluster. We want to keep just one search head (the one with the license master) and one indexer. Is there documentation to "break" the cluster and keep just one indexer in standalone mode? (We need to keep the data for "auditing reasons".) I know I can just put one in maintenance mode and power it off, but that procedure is intended to reboot/replace the "faulty" indexer at some point, not to keep it down forever. Regards.
I have ClamAV running on all my Linux hosts (universal forwarders) and all logs seem to be fine except the ClamAV logs. The ClamAV scan report has an unusual log format (see below). I need help with how to ingest that report. Splunk (splunkd.log) shows an error when I try to ingest it. I think I need to set up a props.conf, but I am not sure how to go about doing it. This is an air-gapped system, just FYI.
splunkd.log:
ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/var/log/audit/clamav_scan_20240916_111846.log). Last time we saw this, filename was different. You may wish to use larger initCrcLen for this sourcetype or a CRC salt on this source.
The ClamAV scan generates a log file as shown below:
-----------SCAN SUMMARY--------------
Known Viruses: xxxxxx
Engine Version: x.xx.x
Scanned Directories: xxx
Scanned Files: xxxxx
Infected Files: x
Data Scanned: xxxxMB
Data Read: xxxxMB
Time:
Start Date: 2024:09:16 14:46:58
End Date: 2024:09:16 16:33:06
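A minimal sketch of what that TailReader error is pointing at, on the forwarder side (the monitor path, sourcetype and index names are assumptions; crcSalt = <SOURCE> makes Splunk treat each differently named scan file as a new file):
inputs.conf (on the UF):
[monitor:///var/log/audit/clamav_scan_*.log]
sourcetype = clamav:scan
index = antivirus
crcSalt = <SOURCE>
disabled = 0
And a possible props.conf on the indexers to keep the whole summary as one event and pick up the timestamp (untested, adjust as needed):
props.conf:
[clamav:scan]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = -+SCAN SUMMARY-+
TIME_PREFIX = Start Date:\s+
TIME_FORMAT = %Y:%m:%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30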
Sorry, mate, but the level of completeness of your description is comparable to "OK, I replaced the flat tyre but now I cannot put the car in gear - can it be a problem with the battery?". We have no idea what your setup looks like, what hosts you have, or what configs. How can we know what's wrong?
As luck would have it.
This works:
| inputlookup servers | dedup host | sort host | table host host
However, this does not:
| inputlookup servers where environment = $token$
| dedup host | sort host | table host host
If I replace $token$ with "DEV" (which is what is in the table) it works. I know the $token$ has the value - out of scope maybe?
I have several apps set up to segregate our various products, and I've added icons to the apps. My issue is that the icon is being placed over the app name; it should be placed next to the app name. For example, the Search and Reporting app has the white arrow on a green background to the left of the app name. How do I get the icon to be placed to the left of the app name?
The app itself seems to be downloadable and usable on on-prem Splunk Enterprise, but it sends data to an offsite service which does the AI processing work. Src: https://docs.splunk.com/Documentation/AIAssistant/1.0.3/User/AboutAIAssistant
Where Splunk AI Assistant for SPL runs:
Splunk AI Assistant for SPL runs as a separate component of Splunk Cloud Platform (SCP) which is not metered like searches are against data indexed by Splunk. For version 1.0.0 and higher the SPL generated by the assistant requires a separate step to Open in Search. Searches executed in the Search app work like any other Splunk search, and consume SVC resources accordingly.
Splunk AI Assistant for SPL runs on AI Service, a multi-tenant cloud service hosted in Splunk Cloud Platform. This AI Service makes GPUs available for generating responses to customer prompts. All the AI compute is offloaded to AI Service and no AI compute is running on the customer's search head.
I don't know whether Splunk plans to release a fully on-prem version. Splunk used to provide a preview version which was fully on-prem, but that program is no longer available. https://www.splunk.com/en_us/blog/platform/flatten-the-spl-learning-curve-introducing-splunk-ai-assistant-for-spl.html
One thing to consider is that if/when your HEC receiver crashes, you will lose those events unless you have configured indexer acknowledgment and your HEC sender/client has implemented it as well! The first part is an easy step, but the second part isn't! Also, when you are using a LB in front of multiple HEC nodes, you will get some duplicate events from time to time.
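The easy, receiver-side half is just a token setting; a minimal sketch on the HEC receiver (the stanza name and token value are placeholders):
inputs.conf (on the HEC receiver):
[http://my_hec_token]
token = 00000000-0000-0000-0000-000000000000
useACK = true
disabled = 0
The harder half is the client: with useACK enabled, it must send an X-Splunk-Request-Channel header with each request and poll the /services/collector/ack endpoint to confirm the events were actually indexed before discarding them from its own buffer.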
Under Settings, there is an option to manage Lookups; it is there that you will find Lookup definitions - add a new one specifying the CSV lookup file you want to define.
As a test, I first created some credit card numbers using a python script.
I placed the script, along with inputs and props, on the search head. I only placed props on the indexers.
The following SEDCMD will mask the 1st and 3rd set of 4-digits. The two groups (2nd and 4th set of 4-digits) will not be masked.
props:
[cc_generator]
SEDCMD-maskcc = s/\d{4}-(\d{4})-\d{4}-(\d{4})/xxxx-\1-xxxx-\2/g
inputs:
[script://./bin/my_cc_generator.py]
interval = */30 * * * *
sourcetype = cc_generator
disabled = 0
index = mypython
output:
xxxx-9874-xxxx-9484
Hi @ITWhisperer - yes, you are correct, that field is populated with subnet values. The lookup file is like this:
cidr         provider   area      zone   region
1.1.1.1/24   Unit 1     Finance   2      US
2.2.2.2/27   Unit 2     HR        16     UK
I am unsure of how to go about creating a lookup definition with the advanced setting for match type CIDR(cidr).
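For what it's worth, what the lookup definition creates under the hood is a transforms.conf stanza with a CIDR match type; a rough sketch (the definition name and filename below are assumptions):
transforms.conf:
[provider_cidr_lookup]
filename = provider_cidr.csv
match_type = CIDR(cidr)
In Splunk Web this corresponds to the "Advanced options" section of the lookup definition, where "Match type" takes CIDR(cidr). It could then be used in a search along these lines (src_ip is a placeholder for whatever field holds the IP):
... | lookup provider_cidr_lookup cidr AS src_ip OUTPUT provider area zone region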