All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Team, the proxy connectivity test for WHOIS RDAP is failing in the Splunk SOAR UI.

Testing Connectivity
App 'WHOIS RDAP' started successfully (id: 1745296324951) on asset: 'asset_whoisrdp' (id: 22)
Loaded action execution configuration
Querying...
Whois query failed. Error: HTTP lookup failed for https://rdap.arin.net/registry/ip/8.8.8.8.
No action executions found.

I have configured the proxy settings at the global environment level. How do I fix this issue?
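For reference, a minimal sketch of the proxy environment variables that SOAR apps typically honor (the proxy host and port below are placeholders, and whether NO_PROXY exceptions are needed depends on your network):

    HTTP_PROXY=http://proxy.example.com:8080
    HTTPS_PROXY=http://proxy.example.com:8080
    NO_PROXY=localhost,127.0.0.1

After changing global environment variables, re-running the asset's connectivity test (or restarting the app) may be needed for the new values to take effect.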
Use iplocation or geostats to display, within a range of about 100 kilometers (a longitude span of 0.89 degrees and a latitude span of 0.91 degrees), which regions' users have logged in, along with the login time, IP address, and login method.
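A minimal sketch, assuming the events carry a client IP field named src_ip and a field named login_method (both hypothetical; substitute your own index and field names):

    index=auth sourcetype=login
    | iplocation src_ip
    | geostats latfield=lat longfield=lon binspanlat=0.91 binspanlong=0.89 count by login_method

iplocation emits lat and lon fields by default, and geostats' binspanlat/binspanlong options set the bin size in degrees, which here corresponds to roughly 100 kilometers.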
I first tried exporting and re-importing the add-on after I moved to version 4.3.0 of the Add-on Builder. I then tried replacing the splunklib folders in the Add-on Builder and in the app that is failing validation with the folder from the latest SDK release. I also used "pip install splunk-sdk" and "python setup.py install" to attempt to update the SDK. None of these changes seem to have had any effect.
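One way to vendor the SDK directly into the failing app, as a hedged sketch (the app name and lib path are assumptions; validation generally checks the splunklib copy bundled inside the app, not a system-wide install):

    pip install --upgrade --target "$SPLUNK_HOME/etc/apps/your_app/lib" splunk-sdk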
Hello, how can I display a JSON tree structure in a summary index without output_mode=hec? I am not a Splunk admin, so the only way I could create a summary index was through a Splunk report: I enabled "Schedule Report" and "Summary Indexing", and when the report ran, it appended the "| summaryindex" syntax to the search query. (See the screenshot below showing the steps.)

The summary index query is: index=summary report=test_1 (the report field differentiates my data from other users').

What I tried:
1) | collect index=summary name=test_1 output_mode=hec - the result DID NOT show up in the summary index.
2) | collect index=summary marker="hostname=\"https://a1.test.com/\",report=\"test_1\"" - the result DID show up in the summary index, but without the JSON tree structure.
3) | collect index=summary marker="hostname=\"https://a1.test.com/\",report=\"test_1\"" output_mode=hec - I received an "invalid argument" error. This is likely because the marker parameter is not compatible with output_mode=hec; I believe only raw output is allowed with marker.

However, at one point I accidentally created a summary index whose events did display as a JSON tree structure, and I don't know what I did differently. Please suggest. Thank you so much.

Steps to create the summary index:
1) Created a Splunk report, edited the search, and enabled the schedule.
2) Enabled summary indexing. After the report ran, it added the | summaryindex syntax.

Here's the search query:
| windbag | head 1 | eval _raw="{\"name\":\"John Doe\",\"age\":30,\"address\":{\"street\":\"123 Main St\",\"city\":\"Anytown\",\"state\":\"CA\",\"zip\":\"12345\"},\"interests\":[\"reading\",\"hiking\",\"coding\"]}"

In the search result using the "List" view, the event shows as a flat string; when I click "Show syntax highlighted", it shows the JSON tree structure, which is the expected result.
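A hedged sketch: in recent Splunk versions the collect option is spelled output_format (with values raw or hec), not output_mode, which could explain the "invalid argument" error; and marker is documented to apply only to raw output. Something like the following may keep the event as structured JSON (using source rather than marker to tag the report; source="test_1" is carried over from the post as an example):

    | collect index=summary output_format=hec source="test_1"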
Hi, I have a 2-site architecture:
Site 1 - 2 indexers, 2 ES search heads
Site 2 - 2 indexers, 1 ES search head
All of them are in clusters. I wish to have 1 copy per site. What should my RF and SF be? Can you also suggest the minimum RF and SF configuration?
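A minimal sketch of the cluster manager's server.conf for one searchable copy per site (the site names are assumptions; on pre-9.x versions the mode value is master rather than manager):

    # server.conf on the cluster manager
    [clustering]
    mode = manager
    multisite = true
    available_sites = site1,site2
    site_replication_factor = origin:1,total:2
    site_search_factor = origin:1,total:2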
The integration itself is working as expected with ServiceNow, but I have run several testing scenarios and I am finding some issues that don't seem to have solutions.

1. Map a Splunk team to a ServiceNow group - works.
2. Change the name of the ServiceNow group (name changes are common) - the incident still routes to the correct group, but when logging into the mapping settings, Splunk shows the old name and there is no option to edit it. You have to delete the mapping and start over. This is definitely not ideal.
3. Disable the group in ServiceNow to simulate a group that gets decommissioned - the integration will continue to map to the inactive group, which isn't ideal, but I can see why that could happen. Again, there is no option to modify the mapping, nor any way to see that the mapping is associated with an inactive group.
4. It doesn't appear that Splunk will allow a team to have no members, but I wanted to confirm, because I wanted to test sending an incident from ServiceNow to see how Splunk would handle a group mapped to an empty team (e.g., all the team members leave the org).

If there is no way to keep things in sync, this product is going to be very difficult to manage over time and may not be the solution we are looking for.
Hi,

I created a custom app in Splunk Cloud so I can migrate all alerts and dashboards from on-prem. I put everything in default, as the docs advise, and my metadata looks like this:

# Application-level permissions
[]
access = read : [ * ], write : [ * ]
export = system

The problem is that even I, as an admin, can't delete dashboards and alerts. I tried reassigning the knowledge objects to myself; nothing works. What I can find on Google is that once everything is in default it's immune to deletion, but I can't put it in local, as that is also not allowed. I have now exported my app, but there is already a local directory because users changed some data. Do I now have to move everything from local to default so I can re-upload it? And what can I do then to be able to delete alerts? We have around 7k alerts and dashboards, so it's a nightmare if I have to delete them manually from the conf files and re-upload again. Please help!
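For the local-to-default merge, a naive copy can clobber stanzas that exist in both directories. A sketch using the community ksconf tool, which merges conf files stanza by stanza (the file name is an example; check ksconf's own docs for the exact flags):

    pip install ksconf
    ksconf promote --batch myapp/local/savedsearches.conf myapp/default/savedsearches.conf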
I have instrumented a Kubernetes cluster in a test environment. I have also instrumented a Java application within that cluster. Metrics are reporting for both. However, within APM, when clicking Infrastructure at the bottom of the screen, I get no data, as if I have no infrastructure configured. What configuration am I missing to correlate the data between the two?

(Screenshot: clicking Infrastructure under an instrumented APM service.)
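A sketch of the pattern that usually enables this correlation: send the Java agent's telemetry through the node-level OTel collector agent, so host and pod metadata get attached to the spans (the variable wiring below is an assumption about your deployment, not a verified fix):

    env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://$(HOST_IP):4317"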
My search query:

index=xxx <xxxxxxx>
| eval Date=strftime(_time,"%Y-%m-%d")
| lookup holidays.csv HolidayDate as Date OUTPUT HolidayDate
| eval should_alert=if(isnull(HolidayDate), "Yes", "No")
| table Date should_alert
| where should_alert="Yes"

I've been trying to create a complicated alert. Unfortunately it failed, and I'm looking for guidance. The alert is supposed to fire if there are no results OR more than one result, unless it's the day after a weekend or holiday; in other words, the alert should look for 0 results OR anything other than 1.

Trigger condition: Number of results is not equal to 1.

The problem: when a date appears in the muted dates (holidays.csv), I want the alert suppressed. But it turned out there were 0 events on Easter, and those 0 results still triggered the alert. Also, when we mute a date, does the search return 0 events? Technically it will still fire on those dates because of my trigger condition. How can I make sure the alert is muted on the holidays.csv dates, yet still alerts on 0 events on dates that are not in holidays.csv?
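A hedged sketch of one way around this: a search over zero events returns no rows for the lookup to suppress, so aggregate first with stats count (which always returns exactly one row), then apply the holiday lookup to today's date. Both the 0-results case and the more-than-1 case can then be tested while holidays still suppress (field and file names are carried over from the post; the trigger condition becomes "number of results is greater than 0"):

    index=xxx <xxxxxxx>
    | stats count
    | eval Date=strftime(now(),"%Y-%m-%d")
    | lookup holidays.csv HolidayDate as Date OUTPUT HolidayDate
    | where isnull(HolidayDate) AND count!=1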
Hi, I need to move all my knowledge objects, including dashboards, alerts, saved searches, and lookups, from my on-prem search head to the Splunk Cloud search head. Please help me out with this. Can I move each app's configs to the cloud search head so that I can replicate the same knowledge objects? We are done with the data migration; now we need to move the apps and knowledge objects.
Hi All,

I have 4 heavy forwarder servers sending data to 5 indexers:
server1 acts as a syslog server and has autoLBFrequency = 10 and maxQueueSize = 1000MB
server2 acts as a syslog server and heavy forwarder, with autoLBFrequency = 10 and maxQueueSize = 500MB
server3 acts as a heavy forwarder, with autoLBFrequency = 10 and maxQueueSize = 500MB
server4 acts as a heavy forwarder, with autoLBFrequency = 10 and maxQueueSize = 500MB

We are receiving blocked=true in metrics.log while the syslog/heavy forwarders try to send data to the indexers. Because of this, ingestion is delayed and data arrives in Splunk 2-3 hours late.

One of the 5 indexers consistently runs at 99-100% CPU; it has 24 CPUs, as do the other indexers. We are planning to upgrade that highly utilized indexer alone from 24 to 32 CPUs.

Kindly suggest: will updating the settings below in outputs.conf reduce or stop the blocked=true messages in metrics.log and bring the indexer CPU load back to normal before we upgrade the CPU? Or do we need both the outputs.conf changes and the CPU upgrade? If both, which should we try first? Kindly help.

autoLBFrequency = 5
maxQueueSize = 1000MB
aggQueueSize = 7000
outputQueueSize = 7000
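For reference, a hedged sketch of where the first two settings belong on each forwarder (the stanza name and server list are assumptions). Note that aggQueueSize and outputQueueSize do not appear to be outputs.conf settings; in-memory queue sizes are typically tuned via [queue=...] maxSize stanzas in server.conf, so verify against the spec files for your version:

    # outputs.conf on each heavy forwarder
    [tcpout:primary_indexers]
    server = idx1:9997,idx2:9997,idx3:9997,idx4:9997,idx5:9997
    autoLBFrequency = 5
    maxQueueSize = 1000MB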
Hi All, I am looking for help onboarding Citrix VDI logs and Citrix WAF logs into Splunk. A Splunk add-on is not available; we confirmed this with Splunk support. Can anyone help and guide on the best practice for onboarding Citrix VDI and WAF logs? Much appreciated if you have solutions.
In earlier versions of Splunk, I remember there used to be an option to disable an active user, after which the user would show a status of inactive/disabled. Now I can't see any option to disable a user; only the delete option is there. Does anyone know how to disable a user now, or, if this capability has been removed from Splunk, what the alternative is?
I have a few records in Splunk like this:

{"timeStamp":"2025-04-21T08:21:40.000Z","eventId":"test_eventId_1","orignId":"test_originId_1","tenantId":"test_tenantId","violation_stats":{"Key1":11,"Key2":23,"Key3":1,"Key4":1,"Key5":1},"lastModifier":"test_admin","rawEventType":"test_event"}
{"timeStamp":"2025-04-21T08:21:40.000Z","eventId":"test_eventId_2","orignId":"test_originId_2","tenantId":"test_tenantId","violation_stats":{"Key1":1,"Key10":1},"lastModifier":"test_admin","rawEventType":"test_event"}
{"timeStamp":"2025-04-21T08:21:40.000Z","eventId":"test_eventId_3","orignId":"test_originId_3","tenantId":"test_tenantId","violation_stats":{"Key6":1,"Key7":2,"Key8":1,"Key9":4},"lastModifier":"test_admin","rawEventType":"test_event"}
{"timeStamp":"2025-04-21T08:21:40.000Z","eventId":"test_eventId_4","orignId":"test_originId_4","tenantId":"test_tenantId","lastModifier":"test_admin","rawEventType":"test_event"}

Now, I need to check how many records contain the violation_stats field and how many do not. I tried the query below, but it didn't work:

index="my_index" | search violation_stats{}=*

I read online that I might need to use spath. However, since the keys inside the JSON are not static, I am not sure how I can use spath to get my result.
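A hedged sketch: spath can extract the violation_stats object itself, regardless of which keys it contains, after which a null check splits the counts:

    index="my_index"
    | spath path=violation_stats output=vs
    | eval has_violation_stats=if(isnotnull(vs), "yes", "no")
    | stats count by has_violation_stats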
Hi, we are using the event field message in our alert, but in some cases the field is not being parsed correctly. For example, in the attached screenshot, the source event contains the full text in raw format, i.e., message="The full message". However, when we check the event under the Action tab, it only shows the first word of the message, "The", which results in incorrect information being sent in alerts. Could someone please help us resolve this issue? I appreciate any help you can provide.
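If automatic key-value extraction is stopping at the first space (for example, because the quotes are not actually part of the raw event), a search-time rex can pull the full quoted value. A sketch, under the assumption that the raw text really does contain message="...":

    ... | rex field=_raw "message=\"(?<message>[^\"]+)\""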
Hi,

I'm facing an issue where the same data gets indexed multiple times every time a JSON file is pulled from the FTP server. Each time the JSON file is retrieved and placed on my local Splunk server, it overwrites the existing file. I don't have control over the content placed on the FTP server; it could be an entirely new entry or an existing entry with new data added, as shown below. I'm monitoring a specific file, as its name, type, and path remain consistent. From what I can observe, every time the file contains new entries alongside previously indexed data, the whole file is re-indexed, causing duplication.

Example:

file.json
2024-04-21 14:00 - row 1
2024-04-21 14:10 - row 2

overwritten file.json
2024-04-21 14:00 - row 1
2024-04-21 14:10 - row 2
2024-04-21 14:20 - row 3

Additionally, I checked the sha256sum of the JSON file after it is pulled onto my local Splunk server; the hash value changes before and after the file is overwritten.

file.json: 2217ee097b7d77ed4b2eabc695b89e5f30d4e8b85c8cbd261613ce65cda0b851 /home/ws/logs/###.json
overwritten file.json: 45b01fabce6f2a75742c192143055d33e5aa28be3d2c3ad324dd2e0af5adf8dd /home/ws/logs//###.json

I've tried using initCrcLength, crcSalt, and followTail, but none of them prevent the duplication; Splunk still indexes the file as new data. Any assistance would be appreciated, as I can't seem to prevent the duplication in indexing.
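One pattern that sometimes helps here, as a sketch with hypothetical paths: if the FTP pull truncates the file and rewrites it in place, the monitor can observe a shorter file mid-transfer and reset its seek pointer, re-indexing everything. Writing the download outside the monitored directory and moving it into place atomically avoids that window:

    # hypothetical fetch script
    curl -o /home/ws/tmp/file.json "ftp://ftp.example.com/file.json"
    mv /home/ws/tmp/file.json /home/ws/logs/file.json   # atomic on the same filesystem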
Hello Splunkers!!

Issue description: we are experiencing a significant delay in data ingestion (>10 hours) for one index in Project B within our Splunk environment. Interestingly, Project A, which operates with a nearly identical configuration, does not exhibit this issue, and data ingestion occurs as expected.

Steps taken to diagnose the issue:
Timezone consistency: verified that the timezone settings on the database server (the source of the data) and the Splunk server are identical, ruling out timestamp misalignment.
Props configuration: confirmed that the props.conf settings align with the event patterns, ensuring proper event parsing and processing.
System performance: monitored CPU performance on the Splunk server and found no resource bottlenecks or excessive load.
Configuration comparison: conducted a thorough comparison of configurations between Project A and Project B, including inputs, outputs, and indexing settings, and found no apparent differences.

Observations: the issue is isolated to Project B, despite both projects sharing similar configurations and infrastructure. Project A processes data without delays, indicating that the Splunk environment and database connectivity are generally functional.

(Screenshot 1 and Screenshot 2 referenced in the original post.)

Event sample:
TIMESTAMP="2025-04-17T21:17:05.868000Z",SOURCE="TransportControllerManager_x.onStatusChangedTransferRequest",IDEVENT="1312670",EVENTTYPEKEY="TRFREQ_CANCELLED",INSTANCEID="210002100",OBJECTTYPE="TRANSFERREQUEST",OPERATOR="1",OPERATORID="1",TASKID="10030391534",TSULABEL="309360376000158328"

props.conf:
[wmc_events]
CHARSET = AUTO
KV_MODE = AUTO
SHOULD_LINEMERGE = false
description = WMC events received from the Oracle database, formatted as key-value pairs
pulldown_type = true
TIME_PREFIX = ^TIMESTAMP=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
TZ = UTC
NO_BINARY_CHECK = true
TRUNCATE = 10000000
#MAX_EVENTS = 100000
ANNOTATE_PUNCT = false
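To quantify where the delay sits, a diagnostic sketch comparing index time with event time (the index name is a placeholder). One detail that may also be worth testing: TIME_PREFIX = ^TIMESTAMP= leaves the opening double quote in front of the timestamp, so ^TIMESTAMP=\" could behave differently:

    index=<project_b_index> sourcetype=wmc_events
    | eval lag_seconds = _indextime - _time
    | stats min(lag_seconds) avg(lag_seconds) max(lag_seconds) by host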
Hi Folks,

I'm new to Splunk and SC4S deployment. So far I have been able to make good progress: I have set up 2 SC4S servers, one on Linux and the other on Windows with WSL. The challenge I am facing is that all the syslogs are going to the default indices. For example, I see that the firewall logs are going to netfw. I am trying to move them to a new index that I have created, index_new. I have tried editing the splunk_metadata.csv file, but I still see the logs going to netfw. I have tried different configurations, but nothing worked:

fortinet_fortigate,index, index_new
or
ftnt_fortigate,index,index_new
or
netfw,index,index_new

In the HEC configuration, I have not selected any index and left it blank; the default index is set to index_new.

Thank you in advance.

PS: I have also tried Maciek Stopa's postfilter.conf script as well.
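A hedged sketch of the override (the key fortinet_fortigate is carried over from the post, not verified against the SC4S source docs, so check the metadata key listed for FortiGate there). Two details that do matter: splunk_metadata.csv entries should have no spaces after the commas, since a leading space likely becomes part of the index name, and the SC4S container must be restarted to pick up context-file changes:

    # /opt/sc4s/local/context/splunk_metadata.csv
    fortinet_fortigate,index,index_new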
Hi, I need recommendations for a TYPO3 logs sourcetype. By default, I set the sourcetype to "typo3" in inputs.conf, but the logs are not parsed properly. I did not find any Splunk TA for TYPO3 that could help with parsing. Does anyone have experience onboarding TYPO3 logs? Thank you!
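In the absence of a TA, a custom sourcetype in props.conf is the usual route. A sketch under the assumption that each entry is a single line starting with an RFC 2822-style timestamp (the stanza name, TIME_FORMAT, and limits below are placeholders to adapt to your actual TYPO3 log format):

    # props.conf (sketch)
    [typo3:custom]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = ^
    TIME_FORMAT = %a, %d %b %Y %H:%M:%S %z
    MAX_TIMESTAMP_LOOKAHEAD = 40
    TRUNCATE = 10000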
I need to calculate the time difference between start and end times, but I get the difference value as null. Not sure what I am missing. Below is the sample query:

| makeresults
| eval a="27 Mar 2025,02:14:11"
| eval b="27 Mar 2025,03:14:12"
| eval stime=strptime(a,"%d %b %Y,%H:%M:%S")
| eval etime=strptime(b,"%d %b %Y,%H:%M:%S")
| eval diff = eTime - sTime
| table a b stime etime diff

I get the result below, with the diff value empty:

a                      b                      stime               etime               diff
27 Mar 2025,02:14:11   27 Mar 2025,03:14:12   1743041651.000000   1743045252.000000

Please help me identify where I am going wrong.
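A sketch of the likely fix: field names are case-sensitive in SPL, and the query defines stime/etime but subtracts eTime/sTime, which are null, so diff evaluates to null. Matching the case resolves it:

    | makeresults
    | eval a="27 Mar 2025,02:14:11", b="27 Mar 2025,03:14:12"
    | eval stime=strptime(a,"%d %b %Y,%H:%M:%S")
    | eval etime=strptime(b,"%d %b %Y,%H:%M:%S")
    | eval diff = etime - stime
    | table a b stime etime diff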