Activity Feed
- Karma Re: Issue with file csv monitoring for gcusello. 3 weeks ago
- Karma Re: Issue with file csv monitoring for vsommer. 3 weeks ago
- Karma Re: Issue with file csv monitoring for PickleRick. 3 weeks ago
- Karma Re: Issue with file csv monitoring for livehybrid. 3 weeks ago
- Posted Re: Issue with file csv monitoring on Getting Data In. 3 weeks ago
- Posted Re: Issue with file csv monitoring on Getting Data In. 3 weeks ago
- Posted Re: Issue with file csv monitoring on Getting Data In. 3 weeks ago
- Posted Issue with file csv monitoring on Getting Data In. 3 weeks ago
- Karma Buttercup Games: Further Dashboarding Techniques (Part 4) for ITWhisperer. 03-08-2025 12:34 AM
- Posted Re: kvstore issue on Splunk Enterprise. 03-08-2025 12:24 AM
- Posted kvstore issue on Splunk Enterprise. 03-08-2025 12:13 AM
- Karma Re: Issue in Splunk with the time brackets for livehybrid. 03-08-2025 12:13 AM
- Karma Re: Issue in Splunk with the time brackets for isoutamo. 03-08-2025 12:13 AM
- Tagged kvstore issue on Splunk Enterprise. 03-08-2025 12:13 AM
- Posted Re: Issue in Splunk with the time brackets on Splunk Search. 03-02-2025 07:50 PM
- Posted Issue in Splunk with the time brackets on Splunk Search. 02-28-2025 02:28 AM
- Posted Re: Search peer SSL config check- How to resolve these errors that popped up after upgrade? on Security. 02-26-2025 01:12 AM
- Posted Re: Unable to load images in splunk dashboard on Dashboards & Visualizations. 02-25-2025 09:28 AM
- Posted Re: Splunk error on Splunk Enterprise. 02-25-2025 09:26 AM
- Karma Re: Splunk error for kiran_panchavat. 02-25-2025 09:23 AM
Topics I've Started
3 weeks ago
@PickleRick I have executed the command, but nothing relevant to my stanza shows up. For your information, here are my current settings.

inputs.conf:

```
[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\000000000*-*-SZC.VIT.BaptoEvents.*]
whitelist = \.csv$
disabled = false
index = Bapto
initCrcLength = 256
sourcetype = SZC_BaptoEvent
```

props.conf:

```
[SZC_BaptoEvent]
SHOULD_LINEMERGE = false
#CHARSET = ISO-8859-1
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
TRANSFORMS-drop_header = remove_csv_header
TZ = UTC
```

transforms.conf:

```
[remove_csv_header]
REGEX = ^Timestamp;AlarmId;SenderType;SenderId;Severity;CreationTime;ComplexEventType;ExtraInfo
DEST_KEY = queue
FORMAT = nullQueue
```

Sample of the CSV files to be monitored:

```
Timestamp;AlarmId;SenderType;SenderId;Severity;CreationTime;ComplexEventType;ExtraInfo
2025-03-27 12:40:12.152;1526;Mpg;Shuttle_115;Information;2025-03-27 12:40:12.152;TetrisPlanningDelay;TetrisId: TetrisReservation_16_260544_bqixLeVr,ShuttleId: Shuttle_115,FirstDelaySection: A24.16,FirstSection: A8.16,LastSection: A24.16
2025-03-27 12:40:12.152;1526;Mpg;Shuttle_115;Unknown;2025-03-27 12:40:12.152;TetrisPlanningDelay;
2025-03-27 12:40:14.074;0;Shuttle;Shuttle_027;Unknown;2025-03-27 12:40:14.074;NoError;
2025-03-27 12:40:16.056;0;Shuttle;Shuttle_051;Unknown;2025-03-27 12:40:16.056;NoError;
2025-03-27 12:40:30.076;0;Shuttle;Shuttle_119;Unknown;2025-03-27 12:40:30.076;NoError;
```
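Two built-in commands can help verify what the monitor configuration resolves to and which files the tailing processor has actually picked up (a sketch; it assumes a Windows host, since the monitored path is on E:\, and that the commands are run from $SPLUNK_HOME\bin):

```
REM Show the effective, merged monitor stanzas after all config layering:
splunk btool inputs list --debug | findstr /i "Bapto"

REM Show the tailing processor's view of each matched file and its read status:
splunk list inputstatus
```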
3 weeks ago
Hi @vsommer, I tried your suggestion, but still no luck:

```
[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\]
whitelist = \.csv$
```
3 weeks ago
Dear Splunkers!! I am facing an issue with my Splunk file monitoring configuration. When I define the complete absolute path in inputs.conf, Splunk successfully monitors the files. Below are two examples of working stanzas:

```
[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\0000000002783979-2025-03-27T07-39-33-128Z-SZC.VIT.BaptoEvents.50301.csv]
[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\0000000002783446-2025-03-27T05-09-20-566Z-SZC.VIT.BaptoEvents.50296.csv]
```

However, since more than 200 files are generated, specifying an absolute path for each file is not feasible. To automate this, I attempted to use a wildcard pattern in the stanza:

```
[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\*.csv]
```

Unfortunately, this approach does not ingest any files into Splunk. I would appreciate your guidance on resolving this issue. Looking forward to your insights.
Labels:
- monitor
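One commonly suggested pattern for this situation (a sketch, not the thread's confirmed resolution): point the monitor stanza at the directory itself and select files with a whitelist regex instead of wildcarding the stanza path. The index and sourcetype are taken from the poster's follow-up reply:

```
[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC]
whitelist = SZC\.VIT\.BaptoEvents\..*\.csv$
disabled = false
index = Bapto
sourcetype = SZC_BaptoEvent
```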
03-08-2025 12:24 AM
Hey Will, @livehybrid, you’re even faster than GPT! 😄 We've already upgraded our RAM from 32GB to 64GB.
03-08-2025 12:13 AM
Hello Splunkers!! We are experiencing frequent KV Store crashes, which cause all reports to stop functioning. The error message observed is: "[ReplBatcher] out of memory." This issue is significantly impacting our operations, as many critical reports rely on the KV Store for data retrieval and processing. Please help me get it fixed. Thanks in advance!!
Tags:
- other
Labels:
- installation
- other
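For initial triage of crashes like this, a sketch (it assumes the /opt/splunk install mentioned later in this history; the KV Store's MongoDB log is also indexed internally):

```
# Check KV Store health from the command line:
/opt/splunk/bin/splunk show kvstore-status

# Review the MongoDB log for the out-of-memory errors (run in Splunk Web):
# index=_internal sourcetype=mongod
```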
03-02-2025 07:50 PM
I am using a macro in one of my saved searches and encountering the error below in Splunk. Based on the known issue, what changes should I make to the macro to resolve this error and eliminate the message?

```
ERROR TimeParser [24352 SchedulerThread] - Invalid value "$latest_time$" for time term 'latest'
```

The saved search:

```
`search_on_index_time("`$input_macro$`", $span$)`
| fields _time source id
| bin _time AS earliest_time span=$span$
| eval latest_time=earliest_time+$span$
| stats values(id) AS ids, values(source) AS sources BY earliest_time latest_time
| eval ids="\"".mvjoin(ids, "\",\"")."\"", sources="\"".mvjoin(sources, "\",\"")."\""
| `fillnull(value="", fields="earliest_time latest_time input_macro summarize_macro sources ids")`
| map maxsearches=20000 search="search earliest=$earliest_time$ latest=$latest_time$ `$input_macro$(\"$sources$\",\"$ids$\")` | `$summarize_macro$($earliest_time$, $latest_time$)` | eval _time=$earliest_time$"
| appendpipe [| where source="route" | collect index=$index$ source="route" | where false()]
| appendpipe [| where source="system" | collect index=$index$ source="system" | where false()]
```

@isoutamo @livehybrid
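To enumerate every saved search whose SPL still contains the literal token, a sketch using the REST search command (run with an admin role so all apps are covered):

```
| rest /servicesNS/-/-/saved/searches
| search search="*$latest_time$*"
| table title eai:acl.app search
```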
02-28-2025 02:28 AM
Hello Splunkers!! We recently migrated Splunk from version 8.1.1 to 9.1.1 and encountered the following errors:

```
ERROR TimeParser [12568 SchedulerThread] - Invalid value "`bin" for time term 'latest'
ERROR TimeParser [12568 SchedulerThread] - Invalid value "$info_max_time_2$" for time term 'latest'
```

Upon reviewing the Splunk 9.1.1 release notes, I found that this issue is listed as a known bug. Has anyone observed and resolved it before? If you have implemented a fix, could you share the specific configuration changes or workarounds applied? Any insights on where to check (e.g., saved searches, scheduled reports, or specific configurations) would be greatly appreciated. Below is a screenshot of the known bug in 9.1.1. Thanks in advance for your help!
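To gauge how widespread the scheduler-side parsing errors are after the upgrade, a sketch against the internal logs (log_level and component are standard splunkd field extractions):

```
index=_internal sourcetype=splunkd log_level=ERROR component=TimeParser
| stats count BY host
```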
02-26-2025 01:12 AM
Hi @gowthammahes, I am getting the same warning messages in Splunk. Can I safely ignore them, or is a workaround available?
02-25-2025 09:28 AM
@livehybrid For your information: I found the solution in the Splunk 9.1.1 known issues, and after applying it, everything started working fine.
02-25-2025 09:26 AM
@kiran_panchavat Thanks for your response. My concern is that it worked fine in Splunk Enterprise 8.1.1, but after upgrading to version 9.1.1, I am encountering fatal errors and “bad allocation” issues for the same scheduled search.
02-25-2025 02:06 AM
Hello Splunkers!! I am writing to bring to your attention a critical issue we are experiencing following our recent migration of Splunk from version 8.1.1 to 9.1.1. During routine operations, specifically while attempting to schedule reports from the dashboard using the noop command, we encounter a FATAL error indicating a "bad allocation":

```
Server reported HTTP status=400 while getting mode=results: bad allocation
```

Please help me get it fixed.
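To correlate the FATAL errors with specific scheduled searches, a sketch against the scheduler logs (savedsearch_name and status are standard fields in this sourcetype):

```
index=_internal sourcetype=scheduler status!=success
| stats count BY savedsearch_name status
```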
02-18-2025 08:22 AM
@livehybrid As I said in my question, when I perform Edit > Source > Save, the images load perfectly. That means it is not a permissions issue and no change to web.conf is required. I suspect the issue is with the cache or the drilldown.
02-18-2025 05:30 AM
@livehybrid For your information: I changed the KV Store port from 8191 to 8192, and it has been working properly since then.
02-18-2025 05:29 AM
Hello Splunkers!! I have a Splunk dashboard where I use a drilldown to dynamically load images from a media server. However, the images do not load initially. The strange part is that when I go to Edit > Source and then simply save the dashboard again (without making any changes), the images start loading correctly. Why is this happening, and how can I permanently fix it without manually editing and saving the dashboard every time? Any insights or solutions would be greatly appreciated! I always get the error below; after performing the Edit > Source step, the images load perfectly.
Labels:
- Classic dashboard
- single value
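For reference, a minimal sketch of the kind of token-driven image panel being described (the media URL and token name here are hypothetical, not taken from the actual dashboard):

```
<panel>
  <html>
    <img src="https://media.example.com/images/$selected_image$.png"/>
  </html>
</panel>
```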
02-10-2025 09:36 AM
I just removed the complete kvstore folder from "/opt/splunk/var/lib/splunk/" after taking a backup, and restarted the Splunk services.
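A sketch of those steps as commands (paths follow the post; note this wipes KV Store collections and lookups, which is why the backup matters):

```
/opt/splunk/bin/splunk stop
cp -r /opt/splunk/var/lib/splunk/kvstore /opt/splunk/var/lib/splunk/kvstore.bak
rm -rf /opt/splunk/var/lib/splunk/kvstore
/opt/splunk/bin/splunk start
/opt/splunk/bin/splunk show kvstore-status
```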
02-10-2025 08:31 AM
Dear Splunkers!! Following the migration of our Splunk server from version 8.1.1 to 9.1.1, we have encountered persistent KV Store failures: the service terminates unexpectedly multiple times post-migration. As a workaround, I renewed the server.pem certificate and rebuilt the MongoDB folder. This temporarily resolves the issue and the KV Store starts working as expected, but the corruption reoccurs the following day, requiring the same manual intervention. I am seeking a permanent fix to prevent the KV Store from repeatedly failing. Kindly provide insights into the root cause and recommend a robust solution to ensure KV Store stability post-migration. Looking forward to your expert guidance.
12-18-2024 12:09 AM
Hi @PickleRick, do you have any clue about a fix, or any other possible workaround?
12-17-2024 12:21 AM
@PickleRick As per the link you shared earlier, it mentions removing 'grantableRoles = admin' from the admin role in the authorize.conf file. Will that workaround work, or shall I try it?
12-16-2024 11:24 PM
Hi @PickleRick, I am facing a similar issue. What exactly do I need to do? Shall I change 'grantableRoles = admin' in the authorize.conf file?
12-16-2024 11:06 PM
Hello Splunkers!! I have reassigned all the knowledge objects of 5 users to the admin user. After that, those users are no longer visible in the user list when I log in as the admin user. Please help me identify the root cause and fix this scenario. Thanks in advance.
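For context on the setting discussed in these replies, a sketch of where grantableRoles lives (the role list shown is illustrative, not the poster's actual configuration); per this thread, it limits which roles a role holder can assign, which in turn affects which users appear in the user list:

```
# authorize.conf
[role_admin]
grantableRoles = admin;power;user
```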
12-08-2024 08:51 PM
@PickleRick What I mean to say is that the value of the TASKIDUPDATED field is always unique, so after applying the checkpoint value, each event should be ingested only once, not multiple times. Below are the settings I am currently using for DB Connect.
```
connection = VIn
disabled = 0
index = group_data
index_time_mode = current
interval = */10 * * * *
max_rows = 0
mode = rising
query = SELECT * FROM "WMCDB"."KLDGSF_ROUPOVERVIEW"\
WHERE TASKIDUPDATED < ?\
ORDER BY TASKIDUPDATED DESC
query_timeout = 30
sourcetype = overview_packgroup
tail_rising_column_init_ckpt_value = {"value":null,"columnType":null}
tail_rising_column_name = TASKIDUPDATED
tail_rising_column_number = 3
input_timestamp_column_number = 10
input_timestamp_format =
```
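One detail worth flagging here (a sketch of the conventional rising-column shape, offered as an assumption rather than a confirmed fix): DB Connect's rising mode expects the query to return rows greater than the stored checkpoint in ascending order, so that the last row processed becomes the next checkpoint, whereas the configuration above uses `< ?` with `DESC`:

```
query = SELECT * FROM "WMCDB"."KLDGSF_ROUPOVERVIEW"\
WHERE TASKIDUPDATED > ?\
ORDER BY TASKIDUPDATED ASC
```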
12-08-2024 06:32 AM
@PickleRick I am using the field "TASKIDUPDATED", which is a combination of the TASKID and UPDATED columns and is always dynamic. I have set this field as the rising column, and it changes on every run. Even so, duplicate data is being ingested.
12-06-2024 03:23 AM
My database contains two types of events, and I want to ensure that only the latest row for each unique TASKID is ingested into Splunk, with the following requirements:
- Latest Status: only the most recent status for each TASKID should be captured, determined by the UPDATED timestamp field.
- Latest Date: the row with the most recent UPDATED timestamp for each TASKID should be ingested into Splunk.
- Single Count: each TASKID should appear only once in Splunk, with no duplicates or older rows included.

Please help me achieve this. The method I am currently using is the "rising column" method, but Splunk is still not ingesting the row with the latest status. I am using the query below in the SQL input under DB Connect:

```
SELECT * FROM "DB"."KSF_OVERVIEW"
WHERE TASKIDUPDATED > ?
ORDER BY TASKIDUPDATED ASC
```

Below are sample events from the database.

Status "FINISHED":

```
2024-12-06 11:50:22.984, TASKID="11933815411", TASKLABEL="11933815411", TASKIDUPDATED="11933815411 2024/12/05 19:40:47", TASKTYPEKEY="PACKGROUP", CREATED="2024-12-05 14:18:18", UPDATED="2024-12-05 19:40:47", STATUSTEXTKEY="Dynamic|TaskStatus.key{FINISHED}.textKey", CONTROLLERSTATUSTEXTKEY="Dynamic|TaskControllerStatus.taskTypeKey{PACKGROUP},key{EXECUTED}.textKey", STATUS="FINISHED", CONTROLLERSTATUS="EXECUTED", REQUIREDFINISHTIME="2024-12-06 00:00:00", STATION="PAL/Pal02", REQUIRESCUBING="0", REQUIRESQUALITYCONTROL="0", PICKINGSUBTASKCOUNT="40", TASKTYPETEXTKEY="Dynamic|TaskType.Key{PACKGROUP}.textKey", OPERATOR="1", MARSHALLINGTIME="2024-12-06 06:30:00", TSU="340447278164799274", FMBARCODE="WMC000000000341785", TSUTYPE="KKP", TOURNUMBER="2820007682", TYPE="DELIVERY", DELIVERYNUMBER="17620759", DELIVERYORDERNUMBER="3372948211", SVSSTATUS="DE_FINISHED", STORENUMBER="0000002590", STACK="11933816382", POSITION="Bottom", LCTRAINID="11935892717", MARSHALLINGAREA="WAB"
```

Status "RELEASED":

```
2024-12-05 14:20:13.290, TASKID="11933815411", TASKLABEL="11933815411", TASKIDUPDATED="11933815411 2024/12/05 14:18:20", TASKTYPEKEY="PACKGROUP", CREATED="2024-12-05 14:18:18", UPDATED="2024-12-05 14:18:20", STATUSTEXTKEY="Dynamic|TaskStatus.key{RELEASED}.textKey", CONTROLLERSTATUSTEXTKEY="Dynamic|TaskControllerStatus.taskTypeKey{PACKGROUP},key{CREATED}.textKey", STATUS="RELEASED", CONTROLLERSTATUS="CREATED", REQUIREDFINISHTIME="2024-12-06 00:00:00", REQUIRESCUBING="0", REQUIRESQUALITYCONTROL="0", PICKINGSUBTASKCOUNT="40", TASKTYPETEXTKEY="Dynamic|TaskType.Key{PACKGROUP}.textKey", OPERATOR="1", MARSHALLINGTIME="2024-12-06 06:30:00", TSUTYPE="KKP", TOURNUMBER="2820007682", TYPE="DELIVERY", DELIVERYNUMBER="17620759", DELIVERYORDERNUMBER="3372948211", SVSSTATUS="DE_CREATED", STORENUMBER="0000002590", STACK="11933816382", POSITION="Bottom", MARSHALLINGAREA="WAB"
```
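If duplicates cannot be fully prevented at ingest time, a search-time sketch that keeps only the newest row per TASKID (index, sourcetype, and field names are taken from the posts above; the strptime format matches the UPDATED samples):

```
index=group_data sourcetype=overview_packgroup
| eval updated_epoch=strptime(UPDATED, "%Y-%m-%d %H:%M:%S")
| sort 0 - updated_epoch
| dedup TASKID
```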
11-27-2024 02:03 AM
Hi @bowesmana @PickleRick, just for your information: when I replaced the endpoint /services/collector/event?auto_extract_timestamp=true with /services/collector/raw?auto_extract_timestamp=true, the raw data started arriving in the correct format and the timestamps started matching. Example below. Thanks to both of you for your support and valuable suggestions.
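For anyone landing here later, a hedged example of the endpoint switch described (the host, token, and channel GUID are placeholders; a channel header is mandatory when indexer acknowledgment is enabled, and the payload line is borrowed from the sample events earlier in this history):

```
curl -k "https://splunk.example.com:8088/services/collector/raw?auto_extract_timestamp=true" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -H "X-Splunk-Request-Channel: 11111111-1111-1111-1111-111111111111" \
  -d "2024-12-06 11:50:22.984, TASKID=\"11933815411\", STATUS=\"FINISHED\""
```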