Extracting Splunk 9.3.0 on a Linux build, I see a file in /opt/splunk/ called splunk-9.3.0-<buildhash>-linux-2.6-x86_64-manifest. That file lists every directory and file, along with the permission codes, that should exist in your install. The integrity checks are based on that manifest file. If you still have a 2.7 folder inside your install, you should be safe to delete it, especially if the folder contains zero hidden or visible files/folders.
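If you want to check an install against that manifest, Splunk ships a CLI validator; a minimal sketch, assuming a default /opt/splunk install path:

/opt/splunk/bin/splunk validate files

This compares the on-disk files against the shipped manifest and reports anything that differs.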
Hi @Sangeeta_1, with my search you should get the latest timestamp for each host. If you see future dates, you probably have events that were not parsed correctly and were stamped with future timestamps. Ciao. Giuseppe
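A quick way to confirm that is to look for events stamped in the future (a sketch; an all-time search like this can be heavy on a large environment):

index=* earliest=1 latest=+10y
| where _time > now()
| stats count BY index sourcetype host

Any rows returned point at the sourcetypes whose timestamp extraction needs fixing.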
Hi everyone! I'm trying to figure out how to map a field name dynamically to a column of a table. As it stands, the table looks like this:

twomonth_value  onemonth_value  current_value
5               3               1

I want the output to be instead:

july_value  august_value  september_value
5           3              1

I am able to get the correct dynamic name for each month via

| eval current_value = strftime(relative_time(now(), "@mon"), "%B")."_value"

However, I'm unsure how to change the field name directly in the table. Thanks in advance!
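One way to handle this in SPL is the curly-brace eval syntax, which creates a field whose name is taken from another field's value; a sketch, assuming current_value still holds the number from the original table:

| eval month_name = lower(strftime(relative_time(now(), "@mon"), "%B"))."_value"
| eval {month_name} = current_value
| fields - month_name current_value

The same pattern repeated with relative_time offsets of "-1mon@mon" and "-2mon@mon" would cover the other two columns.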
Hi @ITWhisperer Thanks for your comment, but the metadata command is limited to a certain window of history; I can only get data for roughly the last 30 days or so.
Thanks @gcusello for the help. But I am getting future dates like the ones below, even though the search was for the last time I received any event for each host, with the date range set to all time. Can you please suggest here?

2031-12-11 08:40:08
2025-01-11 09:05:56
2024-10-30 08:12:49
Please don't ever disable SSL for HTTP Event Collection - this is purely from a security standpoint. If you absolutely must have an HTTP-only connection, please set up a separate HF for this purpose. Never expose your indexing tier to non-SSL connections.
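For reference, the TLS setting for HEC lives in the [http] stanza of inputs.conf on whichever instance terminates HEC; a minimal sketch (the certificate path is hypothetical):

[http]
disabled = 0
enableSSL = 1
serverCert = $SPLUNK_HOME/etc/auth/mycerts/hec_server.pem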
https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Outputsconf There are many options available in the outputs.conf.spec sheet. You can start by tuning queues and buffers, but be cautious: data sitting in queues and buffers can age out, with the risk of never being ingested. You can also try enabling compression to reduce network traffic demands, but it will increase the CPU demands on both source and destination, so make sure you have cycles to spare.
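As a starting point, those knobs look roughly like this in outputs.conf (the group name, servers, and sizes are illustrative, not recommendations):

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# a larger output queue buys time during short outages, but queued data can age out
maxQueueSize = 512MB
# compress on the wire: less bandwidth, more CPU on both ends
compressed = true

Note that compressed = true on the forwarder must be matched by compressed = true in the receiving [splunktcp] stanza of inputs.conf, otherwise the connection will fail.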
Hi Everyone, I am not a Splunk engineer but I have a task to do. The sc4s.service has failed and I can't get the logs. It was working before. The error says 'Unauthorized access', but I don't have any credentials for that.

Environment="SC4S_IMAGE=docker.io/splunk/scs:latest"

Could you please help me fix it? Thanks,
Thanks @gcusello, but this will create multiple fields, whereas I want this in a single field, with the results duplicated for each entity, so it'll be easy for me to use a lookup join.

Example dataset:

USER   Rate  Priority
UX101  1.4   2
UX101  2.3   4
UX342  4.6   5
UX515  7.3   1
UX515  2.1   3

Expected output:

USER   Rate  Priority
UX101  1.4   1
UX101  1.4   2
UX101  2.3   3
UX101  2.3   4
UX101  2.3   5
UX342  4.6   1
UX342  4.6   2
UX342  4.6   3
UX342  4.6   4
UX342  4.6   5
UX515  7.3   1
UX515  7.3   2
UX515  7.3   3
UX515  7.3   4
UX515  7.3   5
I am trying to write an eval expression to translate a few different languages into English. One of the languages is Hebrew, which is a right-to-left language, and when I use the Hebrew text in my query, my cursor location is no longer predictable and I cannot copy/paste the Hebrew into an otherwise left-to-right query expression. I then tried to create a macro to do the evaluation, but ran into the same issue. I have tried a different browser (Firefox vs. Brave) and a different program (Notepad++), but I always encounter the cursor/keyboard anomalies after pasting the text into my query. I need to translate a few different strings within a case eval expression. Is anyone aware of similar issues being encountered and/or of any potential workarounds? Does someone have an alternate suggestion as to how I can accomplish the translations?
Here is an example of what I am trying to do:
| eval appName = case(appName="플레이어","player",appName="티빙","Tving",appName=...
This Hebrew text is an example of where I run into issues:
כאן ארכיון
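One possible workaround (a sketch, not tested against this data): keep the RTL text out of the editor entirely by percent-encoding its UTF-8 bytes once, then decoding at search time with urldecode(). The encoded string below should correspond to the sample Hebrew above, and the English label is a placeholder:

| eval appName = case(
    appName=urldecode("%D7%9B%D7%90%D7%9F%20%D7%90%D7%A8%D7%9B%D7%99%D7%95%D7%9F"), "translated_label",
    true(), appName)

Since the query then contains only ASCII, the cursor behaves normally.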
Hi @arjun_ananth, let me know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @tsocyberoperati, in props.conf you can choose a source or a host to filter on. If you choose the source, and you can find the hostname in your logs with a regex, you can solve your issue. E.g. if your source is "/opt/tmp/files/myfile.txt" and the hostname contained in the logs is "my_host", you could try:

in props.conf

[source::/opt/tmp/files/myfile.txt]
TRANSFORMS-hostA = send_to_syslog

in transforms.conf

[send_to_syslog]
REGEX = my_host
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

The only limit is that the hostname must be contained in all events. Ciao. Giuseppe
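For completeness, _SYSLOG_ROUTING points at a syslog output group, which has to be defined in outputs.conf; a sketch with a hypothetical destination (the stanza name must match the FORMAT above):

[syslog:my_syslog_group]
server = syslog.example.com:514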
hi @Teddiz, perimeter.csv is a csv file containing only one column (host) with the list of hostnames to monitor:

host
my_host1
my_host2
my_host3
my_host4

Ciao. Giuseppe
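Once the lookup is populated, a common pattern to find perimeter hosts that have stopped sending data looks like this (a sketch, assuming the host values in the csv match the indexed host field, including case):

| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 ]
| stats sum(count) AS total BY host
| where total=0

Hosts that appear only in the lookup end up with total=0 and are the ones no longer reporting.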
Hi @att35, to my knowledge, Splunk doesn't index a file twice, unless you use crcSalt=<SOURCE>. In that case the file name (and not the content) guides the indexing, but two files with the same name (path and filename) still cannot be indexed twice. You can check whether you have duplicated logs from the same file with a simple search like the following:

index=*
| stats count BY source _raw
| where count>1

Ciao. Giuseppe
Hi, We use Splunk Forwarder to monitor application data. There are multiple folders on a given server, each with the same set of log files, but since the folder names are a distinguishing factor, we are using crcSalt=<SOURCE> so that Splunk treats all the log files as different. We also make sure to lock each stanza to a specific extension as needed, e.g. logname.log or log*.txt, so that rotated files are ignored. That being said, I still want to find out if there are any situations where Splunk could be re-indexing files multiple times and might warrant the use of initCrcLen instead. Is this something that's possible via search? Does the Splunk forwarder keep some type of internal record/tracker showing that it is re-indexing a previously seen file again? Thanks,
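For context, the setup described sounds roughly like this inputs.conf stanza (the path pattern is hypothetical):

[monitor:///app/*/logs/logname.log]
crcSalt = <SOURCE>

With crcSalt = <SOURCE>, the full path is mixed into the checksum, so the same file re-appearing under a new path is treated as new data.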
Thank you for all of this. Every bit of information will be helpful. Believe me, if I could, I would hire a whole team for this. But I'm just an average security guy here who "has some clue about Splunk". The wallet is owned by someone else... BR.