
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hello Splunk Community,

We are currently using Splunk Enterprise 9.1.5 and DB Connect 3.7 to collect data from a Snowflake database view. The view returns data correctly when queried directly via SQL. Here are the specifics of our setup and the issue we're encountering:

Data Collection Interval: Every 11 minutes
Data Volume: Approximately 75,000 to 80,000 events per day, with peak times around 7 AM to 9 AM CST and 2 PM to 4 PM CST (approximately 20,000 events during these periods)
Unique Identifier: The data contains a unique ID column generated by a sequence that increments by 1
Timestamp Column: The table includes a STARTDATE column, which is a TIMESTAMP_NTZ (no timezone) in UTC

Our DB Connect configuration is as follows:

Rising Column: ID
Metadata: _time is set to the STARTDATE field

The issue we're facing is that Splunk is not ingesting all the data; approximately 30% of the data is missing. The ID column has been verified to be unique, so we suspect that the STARTDATE might be causing the issue. Although each event has a unique ID, the STARTDATE may not be unique since multiple events can occur simultaneously in our large environment.

Has anyone encountered a similar issue, or does anyone have suggestions on how to address this problem? Any insights would be greatly appreciated. Thank you!
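Since the sequence increments by 1, one quick way to quantify the gap on the Splunk side is to compare the count of distinct ingested IDs against the ID range. A minimal sketch, assuming the data lands in an index called snowflake and the sequence column is extracted as a numeric field named ID (both names are placeholders):

index=snowflake earliest=-7d
| stats min(ID) AS min_id max(ID) AS max_id dc(ID) AS ingested_ids
| eval expected_ids = max_id - min_id + 1
| eval missing_ids = expected_ids - ingested_ids
| eval missing_pct = round(100 * missing_ids / expected_ids, 2)

If missing_pct tracks the 30% you are seeing, the loss is on the ingestion side rather than in the view itself.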
If you have an on-prem search head cluster, try electing a new captain to force a fresh SHC bundle push. If that doesn't work, more information would be needed about how users and roles are configured and whether anything has changed there. Is there anything configured via the auth .conf files that no longer shows up?
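For reference, captaincy can be transferred from the Splunk CLI and the bundle push is normally triggered from the deployer; a rough sketch, with placeholder host names and credentials:

# run on the member that should become the new captain
splunk transfer shcluster-captain -mgmt_uri https://new-captain.example.com:8089 -auth admin:changeme

# run on the deployer to push a fresh configuration bundle to the cluster
splunk apply shcluster-bundle -target https://new-captain.example.com:8089 -auth admin:changeme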
Hi @Gravoc, first check that the lookup name is correct (it's case sensitive). Then check whether you can see the lookup using the Splunk Lookup Editor app. Then check that you have also created a lookup definition for this lookup. Finally, check the permissions on both the lookup file and the lookup definition. Ciao. Giuseppe
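Two quick checks from the search bar can confirm what Splunk actually sees; a sketch, assuming the file is named my_lookup.csv (a placeholder name):

| inputlookup my_lookup.csv | head 5

| rest /services/data/transforms/lookups | table title eai:acl.app eai:acl.sharing

The first confirms the file is readable from your app context; the second lists the lookup definitions with their app and sharing level.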
Hi @tschmoney1337, please share your full search, because you can modify the field names in rows but not in columns. E.g., if you have a timestamp, you should use stats and eval, and then transpose into columns:

<your_search>
| bin span=1mon _time
| stats count BY _time
| eval current_value = strftime(_time, "%B")."_value"
| table current_value count
| transpose column_name=current_value header_field=current_value

I cannot test it, but it should be correct or very close. Ciao. Giuseppe
Hi Splunk Experts, I hope to get a quick hint on my issue. I have a Splunk Cloud setup with two search heads, one of which is dedicated to Enterprise Security. I have different lookups on this search head containing, e.g., all user attributes. I wanted to enhance a specific search using the lookup command as described in the documentation. Additionally, I can access and view the lookup with the inputlookup command, confirming the file’s existence and proper permissions on the search head.

The search I have trouble with (simplified):

index=main source_type=some_event_related_to_users
| lookup ldap_users.csv identity as src_user

However, this search instantaneously fails with:

[idx-[...].splunkcloud.com,idx-[...].splunkcloud.com,idx-[...].splunkcloud.com] The lookup table 'ldap_users.csv' does not exist or is not available.

I must confess I am rather new to Splunk and even newer to running a Splunk cluster. So I do not really understand why my indexers are looking for the file in the first place. I assumed that the search head would handle the lookup. In addition, as I am a Splunk Cloud customer, I don’t have access to the indexers anyway. Can someone give me a pointer on how to achieve such a query in a Splunk Cloud Environment?
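One thing worth trying as a diagnostic (a sketch, not a confirmed fix for this case): the lookup command accepts a local=true option that forces the lookup to run on the search head instead of being distributed to the indexers, which sidesteps the knowledge-bundle replication question entirely:

index=main source_type=some_event_related_to_users
| lookup local=true ldap_users.csv identity AS src_user

If that works, the underlying issue is likely that the lookup file or its definition is not shared at a scope that gets bundled out to the indexers.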
Hi Team, Currently we are using Splunk UF agents installed on all infra servers, which receive their configuration from deployment servers; both are running version 9.1.2. These logs are forwarded to the Splunk Cloud console via Cribl workers, and the Splunk Cloud indexers and search head are running version 9.2.2. Our question: if we upgrade the Splunk UF and the Splunk Enterprise version on the deployment servers from 9.1.2 to 9.3.0, will it impact the cloud components (due to compatibility issues), or will there be no impact since the cloud components receive logs indirectly via Cribl? Could you please clarify?
Extracting Splunk 9.3.0 on a Linux host, I see a file in /opt/splunk/ called splunk-9.3.0-<buildhash>-linux-2.6-x86_64-manifest. That file lists every directory and file, along with the permission codes, that should exist for your install. The integrity checks are based on that manifest file. If you still have a 2.7 folder inside your install, you should be safe to delete it, especially if the folder contains zero hidden or visible files/folders.
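If you want to re-run the manifest-based integrity check on demand, recent Splunk versions expose it via the CLI (worth double-checking against the docs for your exact version):

# run the built-in installed-file integrity check against the shipped manifest
/opt/splunk/bin/splunk validate files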
Hi @Sangeeta_1, with my search you should get the latest timestamp for each host. If you see future dates, you probably have some events that were not correctly parsed and were indexed with future timestamps. Ciao. Giuseppe
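To track down which data is responsible, one option (a sketch, not from the original thread) is to search an all-time window, keep only events whose _time is in the future, and group them by index, sourcetype and host:

index=* earliest=1 latest=+20y
| where _time > now()
| stats count earliest(_time) AS first_future latest(_time) AS last_future BY index sourcetype host

The sourcetypes that surface here are the ones whose timestamp extraction needs fixing in props.conf.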
Hi everyone! I'm trying to figure out how to map a field name dynamically to a column of a table. As it stands, the table looks like this:

twomonth_value onemonth_value current_value
5 3 1

I want the output to be instead:

july_value august_value september_value
5 3 1

I am able to get the correct dynamic value of each month via

| eval current_value = strftime(relative_time(now(), "@mon"), "%B")."_value"

However, I'm unsure how to change the field name directly in the table. Thanks in advance!
Hi @ITWhisperer, thanks for your comment, but metadata is limited to a certain time range in history; I can only get data for roughly the last 30 days or so.
Thanks @gcusello for the help. But I am getting future dates like the ones below, even though the search was for the last time each host sent any event, and I selected a date range of All Time. Can you please advise?

2031-12-11 08:40:08
2025-01-11 09:05:56
2024-10-30 08:12:49
Please don't ever disable SSL for HTTP Event Collector - this is purely from a security standpoint. If you absolutely must have an HTTP-only connection, please set up a separate HF for this purpose. Never expose your indexing tier to non-SSL connections.
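For reference, a minimal sketch of HEC kept SSL-enabled on a dedicated heavy forwarder (token name, port and index are placeholders):

# inputs.conf on the dedicated HF
[http]
disabled = 0
enableSSL = 1
port = 8088

[http://my_app_token]
token = <generated-token-guid>
index = main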
https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Outputsconf There are many options available in the outputs.conf spec. You can start by tuning queues and buffers, but be cautious: data in queues and buffers can age out, risking lost ingestion. The other thing to try is enabling compression to reduce network traffic demands, but it will increase CPU demands on both source and destination, so make sure you have cycles to spare.
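A rough outputs.conf sketch of the kind of settings being described (the group name, servers and sizes are illustrative, not recommendations):

[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
maxQueueSize = 128MB
useACK = true
compressed = true
# note: with compressed = true, the receiving splunktcp input may also need compression enabled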
Hi Everyone, I am not a Splunk engineer, but I have a task to do. sc4s.service has failed and I can't get the logs; it was working before. The error it reports is 'Unauthorized access', but I don't have any credentials for that.

Environment="SC4S_IMAGE=docker.io/splunk/scs:latest"

Could you please help me figure out how to fix it? Thanks,
Thanks @gcusello, but this will create multiple fields, whereas I would like to keep a single field and have the results duplicated for each entity, so it will be easy for me to use a lookup join.

Example Dataset:
USER Rate Priority
UX101 1.4 2
UX101 2.3 4
UX342 4.6 5
UX515 7.3 1
UX515 2.1 3

Expected Output:
USER Rate Priority
UX101 1.4 1
UX101 1.4 2
UX101 2.3 3
UX101 2.3 4
UX101 2.3 5
UX342 4.6 1
UX342 4.6 2
UX342 4.6 3
UX342 4.6 4
UX342 4.6 5
UX515 7.3 1
UX515 7.3 2
UX515 7.3 3
UX515 7.3 4
UX515 7.3 5
I am trying to write an eval expression to translate a few different languages into English. One of the languages is Hebrew, which is a right-to-left language, and when I use the Hebrew text in my query, my cursor location is no longer predictable, and I cannot copy/paste the Hebrew into an otherwise left-to-right query expression. I then tried to create a macro to do the evaluation, but I ran into the same issue. Even using a different browser (Firefox vs. Brave) or a different program (Notepad++) did not help; I always encounter the cursor/keyboard anomalies after pasting the text into my query. I need to translate a few different strings within a case eval expression. Is anyone aware of any similar issues being encountered and/or of any potential workarounds? Does someone have an alternate suggestion as to how I can accomplish the translations?

Here is an example of what I am trying to do:

| eval appName = case(appName="플레이어","player",appName="티빙","Tving",appName=...

This Hebrew text is an example of where I run into issues: כאן ארכיון
Thx Jawahir007, unfortunately we do not have OVOC (AudioCodes EMS). It would be interesting to find the dashboard SPL code, but we can't find it. Thx
Hi @Sangeeta_1, please try this:

| tstats count latest(_time) AS _time WHERE index=* BY host
| table host _time

Ciao. Giuseppe
Hi @arjun_ananth, let me know if we can help you more; otherwise, please accept one answer to help other community members. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @tsocyberoperati, in props.conf you can filter by source or by host. If you choose the source and can find the hostname in your logs with a regex, you can solve your issue. E.g., if your source is "/opt/tmp/files/myfile.txt" and the host name contained in the logs is "my_host", you could try:

in props.conf

[source::/opt/tmp/files/myfile.txt]
TRANSFORMS-hostA = send_to_syslog

in transforms.conf

[send_to_syslog]
REGEX = my_host
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

The only limit is that the hostname must be contained in all events. Ciao. Giuseppe
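For completeness (an assumption about the rest of the setup, not part of the original answer): the group named in FORMAT also needs a matching syslog output stanza in outputs.conf, along these lines, with a placeholder destination:

[syslog:my_syslog_group]
server = syslog.example.com:514
type = udp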