All Posts

@ganji  1. Check splunkd.log for errors from the app: index=_internal source=*splunkd.log VT4Splunk — or, on the server itself: grep -i "VT4Splunk" /opt/splunk/var/log/splunk/splunkd.log  2. Uninstall and reinstall the app.
OrgID: <enter-orgid> Realm: <enter-realm> Instance Name: <instance-name> Request: Please securely open our Splunk Cloud Platform instance management port (8089) and add the IP addresses of the above realm to our allow list so that we can enable Log Observer Connect. ref: https://docs.splunk.com/observability/en/logs/scp.html#support-ticket So when I'm trying to connect Log Observer with Splunk Cloud (I'm on a free trial), it tells me to open the management port 8089 of the Splunk Cloud instance, and for this we have to raise a case with Splunk support. Is this setup actually required?
@Fara7at08  The "Short ID" button might be missing due to changes in the interface or settings during the upgrade. According to Upgrade Splunk Enterprise Security - Splunk Documentation, under "After upgrading to version 7.0.0": when you upgrade the Splunk Enterprise Security app to version 7.0.0 or higher, the short IDs for notables that were created prior to the upgrade are not displayed on the Incident Review page. As a workaround, you can recreate all the short IDs that were available prior to the upgrade.
I'm building a front-end Node.js Docker image, and it needs to install AppDynamics (appd). My Node.js version is v22 (linux/amd64); the appd version is 24.12.0. After the image is built, when I try to start the server I get the error below:
Appdynamics agent cannot be initialized due to Error: /appdynamics/node_modules/appdynamics-libagent-napi/appd_libagent.node: cannot open shared object file: No such file or directory
Error: /node_modules/appdynamics/node_modules/appdynamics-libagent-napi/appd_libagent.node: cannot open shared object file: No such file or directory
    at Module._extensions..node (node:internal/modules/cjs/loader:1717:18)
    at Module.load (node:internal/modules/cjs/loader:1317:32)
    at Module._load (node:internal/modules/cjs/loader:1127:12)
    at TracingChannel.traceSync (node:diagnostics_channel:315:14)
    at wrapModuleLoad (node:internal/modules/cjs/loader:217:24)
    at Module.require (node:internal/modules/cjs/loader:1339:12)
    at require (node:internal/modules/helpers:125:16)
    at Module._compile (node:internal/modules/cjs/loader:1546:14)
I checked and confirmed the appd_libagent.node is already under
So it looks like I was able to solve this issue by mapping "${user:groups}" to the role attribute and creating Splunk SAML groups with the name of the Group ID from the IdP group.
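For anyone landing here later, the mapping described above can be sketched in authentication.conf. This is a sketch only: the stanza suffix must match your SAML authSettings name, and the group IDs below are hypothetical placeholders, not real values.

```
# authentication.conf (sketch -- "SAML" must match your authSettings
# stanza name; group IDs are hypothetical placeholders)
[roleMap_SAML]
admin = <idp-admin-group-id>
user = <idp-user-group-id>
```

Each Splunk role on the left is granted to users whose IdP group (as sent in the role attribute) matches the value on the right.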
There is no need to test.  Splunk will only parse an event as JSON if the *entire* event is nothing but pure well-formed JSON.  It can't parse part of the event or extract a field and parse that.  Of course, you can do those things yourself in a query, but Splunk won't do it automatically.
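As a sketch of the do-it-yourself approach mentioned above, assuming the JSON fragment sits after some prefix text and using a hypothetical field name payload:

```
| rex field=_raw "^[^\{]+(?<payload>\{.*\})"
| spath input=payload
```

rex pulls the JSON substring into payload, and spath parses that field instead of _raw; adjust the regex to your actual event layout.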
Hi everyone, I am using IAM Identity Center as the IdP for SAML auth. I have two groups in the IdP and two SAML groups in Splunk with differing roles, and the IdP groups contain different users. My issue is that I am unable to work out how to assign a user to a group in Splunk based on the user's group in the IdP. I can hard-set the role attribute to one group name, or even both group names, but this results in all users receiving the referenced Splunk group's role regardless of which group they are assigned to in the IdP. Does anyone know how to resolve this issue, or whether there is a user attribute for group?
When I open the configuration page, VT4Splunk app throws an error. I am on Splunk 9.2.2
Hi @Cheng2Ready  To mute alerts on the day after a holiday (and on the Monday after a Friday holiday), it is easier to look backwards from the event's date: work out which earlier date would make today a "mute day", then check that date against the lookup. Here's a revised version of your query:
index=<search>
| eval day_of_week=strftime(_time, "%A")
| eval check_date=if(day_of_week=="Monday", strftime(_time-3*86400, "%Y-%m-%d"), strftime(_time-86400, "%Y-%m-%d"))
| lookup holidays.csv HolidayDate as check_date OUTPUT Holiday
| eval should_alert=if(isnull(Holiday), "Yes", "No")
| table _time check_date should_alert
| where should_alert="Yes"
Explanation: Day of the Week Calculation: strftime(_time, "%A") retrieves the day of the week for the event. Check Date Calculation: if today is Monday, the relevant holiday would have been three days ago (Friday); otherwise it would have been yesterday, so check_date subtracts 3*86400 or 86400 seconds from _time respectively. Note that the date arithmetic has to be done on _time (epoch seconds) before formatting; adding 86400 to an already-formatted string like "2024-04-01" will not work. Mute Logic: if check_date matches a HolidayDate in the lookup, Holiday is populated and should_alert becomes "No". Final Filtering: the where clause keeps only the events for which alerts should still fire. This mutes alerts on the day following any holiday, and on the Monday following a Friday holiday, per your requirements. Please let me know how you get on and consider accepting this answer or adding karma if it has helped. Regards Will
I have a Holiday.csv file that imports dates for specific holiday dates, for example:
2024-04-01
2026-12-29
2028-06-26
I am working on muting alerts during the day after those dates. So, if the holiday was on Monday, the alert shouldn't fire on Tuesday; if the holiday was on Tuesday, it shouldn't fire on Wednesday, etc. The weird one is that if the holiday is on a Friday, then we actually don't want the alert to fire on Monday. This is what I have for my query; I'm just not sure how I would add in the Friday scenario. I did strftime(_time+86400,"%Y-%m-%d") ```to add one day```
index=<search>
| eval Date=strftime(_time,"%Y-%m-%d")
| lookup holidays.csv HolidayDate as Date output Holiday
| eval should_alert=if((holidays.csv!="" AND isnull(Holiday)), "Yes", "No")
| table Date should_alert
| where should_alert="Yes"
If something like this is possible in Splunk, I think it would work: if the holiday is a Friday, add 3 days, otherwise add 1 day.
Hi @eandres  In Splunk, when defining a lookup within transforms.conf, the time_field parameter is used to specify a field in the lookup table that represents a timestamp. This allows Splunk to apply time-based filtering, ensuring that lookup results are relevant to the event's timestamp. How to troubleshoot and fix: Verify the format of timestamps in your lookup file and ensure time_format matches. Check if your events fall within the expected time range of the lookup. Test the lookup manually using | inputlookup my_lookup to confirm that timestamps are stored correctly. Remove time_field if time-based filtering is not required. Please let me know how you get on and consider accepting this answer or adding karma if it has helped. Regards Will
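For reference, a time-based lookup definition in transforms.conf looks roughly like the sketch below; the stanza name, filename, and field name are placeholders, not taken from the thread.

```
# transforms.conf (sketch -- names are placeholders)
[my_lookup]
filename = my_lookup.csv
time_field = d
time_format = %Y-%m-%d %H:%M:%S
# optional bounds on how far an event's _time may differ from the
# lookup timestamp before the match is rejected
max_offset_secs = 3600
min_offset_secs = 0
```

If time_field is set, rows only match events whose _time falls within the configured offsets of the row's timestamp, which is why an otherwise-correct lookup can return nothing.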
This is a time-based lookup, so if the _time in your event is not close enough to the time field in the lookup, it will not return a match.
That is unusual, I've never had an issue including tokens as you suggested. The only thing I can think of is the underscores, although I have no particular idea as to why that would cause an issue. Could you try changing the field name to remove the underscores and check how it behaves after this? Just to clarify: when you run the search manually, you do get the "my_rule_title" field in the results, right? Please let me know how you get on and consider accepting this answer or adding karma if it has helped. Regards Will
We're seeing this error as well. Appears to have started after an upgrade from 9.1.6 to 9.2.4. Did you find any resolution or root cause? Does seem to be a bug as it started directly at upgrade time.
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
I set my client test script to connect to port 9997 on the IF server that currently has ~1000 clients connected to it, and it immediately fails to establish any more connections. I have since built a new server using the exact same IF configuration, and my client script connects to 9997 with up to 16000 connections as expected. I don't understand this; I guess the splunkd process with the 1000 connections knows there is data traversing and that it is too busy to accept more connections. In metrics.log I see this every minute: <date time> Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=1217, largest_size=1217, smallest_size=0  I don't know what this means or whether it is significant, so I will start researching it. I'm also not sure what you mean by adding a pipeline, so I will look into that too.
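On the "adding a pipeline" point: an additional ingestion pipeline set is enabled in server.conf on the instance doing the parsing. A minimal sketch, assuming the instance has spare CPU cores to dedicate to the extra pipeline:

```
# server.conf (sketch -- raise only if CPU headroom exists)
[general]
parallelIngestionPipelines = 2
```

Each pipeline set gets its own parsing/typing/index queues, which can help when a single parsingqueue is persistently blocked as in the metrics.log line above.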
I believe this has something to do with the lookup having time_field set in transforms.conf, e.g. "time_field = d".
Yup @richgalloway ,  it worked now. I have set it to 1 and tested. Thanks
Running a lookup where I have verified the fields exist and match, but it's not returning an output field. I verified by running the lookup against itself and it still doesn't match. I have checked permissions and ran the search from the app the lookup belongs to. I can view the lookup with "| inputlookup <name>". Example running the lookup on itself:
| inputlookup myfile
| table a, b
| lookup myfile a OUTPUT b AS c
| table a, b, c
c always shows as empty for this one lookup.
Hi. Thanks for your reply, but the input is AWS Firehose and we don't have inputs. Is it possible for you to review my props.conf and, if you can, test it in a dummy environment?