All Posts

What I would say is that we would normally send all the data to both indexers. This is data load balancing: portions of the data get spread across the two indexers and you get better performance that way. It also has nothing to do with index clustering in the real sense, but it is best practice. It's your choice at the end of the day, though. Heavy Forwarders (full Splunk instances) are typically used for add-ons; from there they send data to the indexers (best practice). You should also send the HF internal logs to the indexers (best practice). The SH connects to the indexers and is configured to send its own logs to the indexers as well (best practice).

Below is an outputs.conf example that sends to both indexers and also sends the local Splunk internal logs. You can tune it to send to just one indexer if you want: remove the second indexer from the group list and uncomment the specific indexer setting. Add this to $SPLUNK_HOME/etc/system/local/outputs.conf on the SH (change the names to reflect your environment) and test. If you're using a custom separate app for outputs.conf, add it there instead. Then restart Splunk on the SH. You can do the same on a HF.

NOTE: The new internal indexes (_ds*) need to be created on the indexers if you are using the latest versions of Splunk.
NOTE: Ensure firewalls, ports, and NTP have been configured. I'm assuming you're not using TLS - that's another subject.

outputs.conf example:

[indexAndForward]
index = false

[tcpout]
defaultGroup = <my_group_name_indexers>
forwardedindex.filter.disable = true
indexAndForward = false
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)

[tcpout:my_group_name_indexers]
# Remove the second indexer if you only want to send to one indexer
server = mysplunk_indexer1:9997, mysplunk_indexer2:9997

# This is only for one indexer receiver
#[tcpout-server://mysplunk_indexer1:9997]
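Once you've restarted, a quick way to verify the setup is to confirm the SH's internal logs are arriving and being spread across both indexers. A minimal sketch, run from the SH, assuming its hostname is my_sh (a placeholder - substitute your own):

index=_internal host=my_sh earliest=-15m
| stats count by splunk_server, sourcetype

Seeing both indexers in the splunk_server column confirms that load balancing is distributing events across the group.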
Looking to create a dashboard to allow users to look up usernames, information, and groups within the Active Directory data. How do I create a search for this?
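One possible starting point, assuming the AD data is already indexed and a dashboard text input sets a $user$ token. The index and sourcetype names below (msad, ActiveDirectory) are placeholders, and the field names depend on how your AD data was ingested - adjust all of them to your environment:

index=msad sourcetype=ActiveDirectory sAMAccountName="$user$"
| stats latest(displayName) AS Name, latest(mail) AS Email, values(memberOf) AS Groups by sAMAccountName

Wiring the $user$ token to a text input then gives users a simple username lookup panel.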
<main index> NOT [search <other source> NoHandlerFoundException
| stats count by xCorrelationId
| fields xCorrelationId
| format]

However, depending on how many exceptions you have, you may run into limitations: the subsearch with the format command essentially returns a long string, which might be too large to be parsed in the main search. Another way to do it is to search both sources, correlate by xCorrelationId, and exclude those xCorrelationIds which have the exception; but this still means you are retrieving both full sets of events and correlating them before you can filter any out.
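A minimal sketch of that second approach, assuming both sources share the xCorrelationId field (the index and source names here are placeholders):

(index=main source=access_log status=500) OR (index=main source=other_log "NoHandlerFoundException")
| eventstats count(eval(match(_raw,"NoHandlerFoundException"))) AS exception_count by xCorrelationId
| where source="access_log" AND exception_count=0

The eventstats tags every event with how many exception events share its correlation id, and the final where keeps only access-log events whose id never saw the exception.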
Hi, I have a question on authenticating to an IDX cluster peer via REST. We have the following environment:

3 IDX in a cluster
3 SH in a cluster
1 CM (License Manager, IDX Cluster Manager, Deployer & Deployment Server)

Our normal authentication for the web is currently LDAP. With my LDAP user I can perform a GET request directly against an indexer, but with a local user created via the WebUI (I tried a local user on the SHC and on the CM) I can't perform any request against an indexer. The WebUI is disabled on the indexers and they don't have the LDAP configuration that the search heads do. How is it that the indexers know my LDAP user but not the locally created one? And how can I make the indexers aware of a user created locally on a SH or the CM?
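For reference, one way to see which accounts a peer actually knows about is the rest search command, which queries a peer's REST API from the search head with your current credentials. A minimal sketch (the indexer name is a placeholder):

| rest splunk_server=mysplunk_indexer1 /services/authentication/users
| table title roles

Local user accounts are stored per instance (in each server's own etc/passwd) and are not replicated, so a user created locally on the SH or CM will not exist on the indexers unless it is created there as well.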
Hi @yuanliu, here are my responses below. Please let me know if I missed anything.

1) What is your actual data structure?
- It is in JSON format:

userActions: [
  {
    apdexCategory: FRUSTRATED
    application: xxx
    cdnBusyTime: null
    cdnResources: 0
    cumulativeLayoutShift: 0.0006
    customErrorCount: 0
    dateProperties: [...]
    documentInteractiveTime: 13175
    domCompleteTime: 15421
    domContentLoadedTime: 14261
    domain: xxx
    doubleProperties: [...]
    duration: 15430
    endTime: 1714043044710
    firstInputDelay: 1
    firstPartyBusyTime: null
    firstPartyResources: 0
    frontendTime: 14578
    internalApplicationId: APPLICATION-B9BADE8D75A35E32
    internalKeyUserActionId: APPLICATION_METHOD-E3EF5284923A6BA3
    javascriptErrorCount: 0
    keyUserAction: true
    largestContentfulPaint: 15423
    loadEventEnd: 15430
    loadEventStart: 15430
    longProperties: [...]
    matchingConversionGoals: [...]
    name: loading of page /home
    navigationStart: 1714043029280
    networkTime: 804
    requestErrorCount: 0
    requestStart: 630
    responseEnd: 852
    responseStart: 678
    serverTime: 48
    speedIndex: 13866
    startTime: 1714043029280
    stringProperties: [...]
    targetUrl: xxx
    thirdPartyBusyTime: null
    thirdPartyResources: 0
    totalBlockingTime: null
    type: Load
    userActionPropertyCount: 0
    visuallyCompleteTime: 15416
  }
]

2) Is it close to what I speculated? Further, you have never illustrated the expected output, so the two screenshots with only timestamps mean nothing to volunteers. How can we help further?
- Yes, correct; the data is as you predicted. Here is the expected output:

_time                Application  Action            Target_URL     Duration  User_Action_Type
2024-04-25 07:39:53  xxx          loading /home     www.abc.com    0.26      Load
2024-04-25 06:25:50  abc          loading /wcc/ui/  www.xyz.com    3.00      Load
2024-04-24 19:00:57  xyz          keypress policy   www.bdc.com    3.00      Load
2024-04-24 17:05:11  abc          loading /home     www.xyz.com    0.53      Xhr
2024-04-24 10:14:47  bcd          loading /prod     www.rst.com    0.02      Load

3) Specifically, WHY should the output NOT have multiple timestamps after deduping application, name and target url?
- I could see multiple timestamps after deduping application, name and target url. Here is the screenshot below; see the yellow highlighted rows.

4) In your original SPL, the only dedup is on x, which is Application+Action+Target_URL. How is this different?
- My bad; here is the correct SPL query that I tried:
index="dynatrace" sourcetype="dynatrace:usersession" source=saas_prod events{}.application="participantmanagement.thehartford.com" userExperienceScore=FRUSTRATED | rename userActions{}.application as Application, userActions{}.name as Action, userActions{}.targetUrl as Target_URL, userActions{}.duration as Duration, userActions{}.type as User_Action_Type, userActions{}.apdexCategory as useractions_experience_score | eval x=mvzip(mvzip(Application,Action),Target_URL), y=mvzip(mvzip(Duration,User_Action_Type),useractions_experience_score) | mvexpand x | mvexpand y | eval x=split(x,","), y=split(y,",") | eval Application=mvindex(x,0), Action=mvindex(x,1), Target_URL=mvindex(x,2), Duration=mvindex(y,0), User_Action_Type=mvindex(y,1), useractions_experience_score=mvindex(y,2) | eval Duration_in_Mins=Duration/60000 | eval Duration_in_Mins=round(Duration_in_Mins,2) | table _time, Application, Action, Target_URL,Duration_in_Mins,User_Action_Type,useractions_experience_score | search useractions_experience_score=FRUSTRATED | sort - _time | dedup _time | search Application="*" | search Action="*" | fields - useractions_experience_score 5) Anything after mvexpand in my search is based on my reading of your intent based only on that complex SPL sample. Instead of making volunteers to read your mind, how about expressing the actual dataset, the result you are trying to get from the data, and the logic to derive the desired result and dataset in plain language (without SPL)? Here is the actual dataset below. 4/25/24 7:39:53.000 AM { [-] applicationType: WEB_APPLICATION bounce: true browserFamily: Microsoft Edge browserMajorVersion: Microsoft Edge 122 browserType: Desktop Browser clientType: Desktop Browser connectionType: UNKNOWN dateProperties: [ [+] ] displayResolution: HD doubleProperties: [ [+] ] duration: 15430 endReason: TIMEOUT endTime: 1714043044710 errors: [ [+] ] events: [ [-] { [-] application: xxx internalApplicationId: APPLICATION-B9BADE8D75A35E32 name: Page change page: /index.html pageGroup: /index.html startTime: 1714043029280 type: PageChange } { [-] application: xxx internalApplicationId: APPLICATION-B9BADE8D75A35E32 name: Page change page: /home pageGroup: /home pageReferrer: /index.html pageReferrerGroup: /index.html startTime: 1714043043405 type: PageChange } { [-] application: xxx internalApplicationId: APPLICATION-B9BADE8D75A35E32 name: Page change page: /employee/details pageGroup: /employee/details pageReferrer: /employee/coverage-list pageReferrerGroup: /employee/coverage-list startTime: 1714043088821 type: PageChange } { [-] application: xxx internalApplicationId: APPLICATION-B9BADE8D75A35E32 name: Page change page: /employee/coverage-list pageGroup: /employee/coverage-list pageReferrer: /employee/details pageReferrerGroup: /employee/details startTime: 1714043403199 type: PageChange } ] hasCrash: false hasError: false hasSessionReplay: false internalUserId: 17140430425327CQRENIQATV2OT5DV5BTJ2UB3MQF2ALH ip: 10.215.67.0 longProperties: [ [+] ] matchingConversionGoals: [ [+] ] matchingConversionGoalsCount: 0 newUser: true numberOfRageClicks: 0 numberOfRageTaps: 0 osFamily: Windows osVersion: Windows 10 partNumber: 0 screenHeight: 720 screenOrientation: LANDSCAPE screenWidth: 1280 startTime: 1714043029280 stringProperties: [ [+] ] syntheticEvents: [ [+] ] tenantId: vhz76055 totalErrorCount: 0 totalLicenseCreditCount: 0 userActionCount: 1 userActions: [ [-] { [-] apdexCategory: FRUSTRATED application: xxx cdnBusyTime: null cdnResources: 0 cumulativeLayoutShift: 0.0006 
customErrorCount: 0 dateProperties: [ [+] ] documentInteractiveTime: 13175 domCompleteTime: 15421 domContentLoadedTime: 14261 domain: xxx doubleProperties: [ [+] ] duration: 15430 endTime: 1714043044710 firstInputDelay: 1 firstPartyBusyTime: null firstPartyResources: 0 frontendTime: 14578 internalApplicationId: APPLICATION-B9BADE8D75A35E32 internalKeyUserActionId: APPLICATION_METHOD-E3EF5284923A6BA3 javascriptErrorCount: 0 keyUserAction: true largestContentfulPaint: 15423 loadEventEnd: 15430 loadEventStart: 15430 longProperties: [ [+] ] matchingConversionGoals: [ [+] ] name: loading of page /home navigationStart: 1714043029280 networkTime: 804 requestErrorCount: 0 requestStart: 630 responseEnd: 852 responseStart: 678 serverTime: 48 speedIndex: 13866 startTime: 1714043029280 stringProperties: [ [+] ] targetUrl: xxx thirdPartyBusyTime: null thirdPartyResources: 0 totalBlockingTime: null type: Load userActionPropertyCount: 0 visuallyCompleteTime: 15416 } ] userExperienceScore: FRUSTRATED userSessionId: UEFUBRDAPRDHURTCPUKFKKPJVORTPPJA-0 userType: REAL_USER  
There are multiple methods to achieve this. However, let's first try it in a simpler way:

index=mday source="service_status.ps1" sourcetype=service_status os_service="App_Service" host=*papp01
| stats latest(status) AS status by host
| eventstats values(status) as _status
| eval OverallStatus=if(mvcount(_status) < 2 OR isnull(mvfind(_status,"Running")),"Down","Good")

Steps:
- Count the status values.
- If the count is less than 2 (meaning only one of the statuses Running/Stopped is present), OR the Running status is not available, we set the overall status to Down.

This way we can handle multiple situations: one of the servers is down, both are reporting down, or even both are reporting Running (active & passive).

Demonstrated with a dummy search:

| makeresults | eval host="HostA",status="Running"
| append [| makeresults | eval host="HostB",status="Stopped"]
| stats latest(status) as status by host
| eventstats values(status) as _status
| eval OverallStatus=if(mvcount(_status) < 2 OR isnull(mvfind(_status,"Running")),"Down","Good")

Try changing the status of HostA or HostB and see the results.
Thanks for the clarification.  There are many places where a yellow triangle can appear, so it was hard to know which one you were seeing. I recommend ignoring the IOWait alert since it tends to be over-sensitive.  Tune the health check (Settings -> Health Report Manager) so the alert appears less often.
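If you prefer to manage this in configuration rather than the UI, the same thresholds live in health.conf. A minimal sketch, assuming you want to relax or silence the iowait feature - the indicator name and threshold values below are illustrative, so check health.conf.spec for your version before applying:

[feature:iowait]
# Silence the alert entirely...
alert.disabled = 1
# ...or instead raise the yellow/red thresholds so it fires less often
indicator:avg_cpu__max_perc_last_3m:yellow = 40
indicator:avg_cpu__max_perc_last_3m:red = 60

As with the UI approach, the goal is only to reduce noise, not to mask a genuine I/O problem.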
OK, let's check if the results are available by just removing the below from the dashboard:

depends="$hide_this_always$"

We need to confirm the sid is available, since it's part of the download path. So, either in the title of the panel or somewhere else, just display the token. If the result is available and the sid is present, try using the URL directly in the browser to make sure that the result is fetched. Here is a sample dashboard created using the same logic, and it works:

<dashboard version="1.1" theme="light">
  <label>Download</label>
  <row>
    <!-- Below is the table with the results. We set the panel's depends to a non-existent token so that it always evaluates to false and the panel is not visible. -->
    <panel depends="$hide_always$">
      <title>$sid$</title>
      <table>
        <search>
          <query>index=_*|stats count by sourcetype</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
          <done>
            <eval token="date">strftime(now(), "%d-%m-%Y %H:%M:%S")</eval>
            <set token="sid">$job.sid$</set>
          </done>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <!-- Setting the title for testing purposes, making sure that the SID is available in the token -->
      <title>Job Id is : $sid$, Time is : $date$</title>
      <html>
        <a href="/api/search/jobs/$sid$/results?isDownload=true&amp;timeFormat=%25FT%25T.%25Q%25%3Az&amp;maxLines=0&amp;count=0&amp;filename=test_$date$.csv&amp;outputMode=csv" class="button js-button">Download</a>
        <style>
          .button {
            background-color: steelblue;
            border-radius: 5px;
            color: white;
            padding: .5em;
            text-decoration: none;
          }
          .button:focus, .button:hover {
            background-color: #2A4E6C;
            color: White;
          }
        </style>
      </html>
    </panel>
  </row>
</dashboard>
Hello, I have HTTP 500 messages in my access log. I also have corresponding events from other log sources with the same correlation-id. Now I want to join the information to enhance the results.

Access Log Events:

2024-04-25T11:00:26+00:00 [info] type=access status=500 xCorrelationId=90e2a321-f522-466f-9ffa-72cbdaa1a576 ....
2024-04-25T10:15:25+00:00 [info] type=access status=500 xCorrelationId=9b1833f5-776b-44c3-92d7-d603abdfecf8 ...

Other Events:

2024-04-25T10:15:24+00:00 xCorrelationId=9b1833f5-776b-44c3-92d7-d603abdfecf8 NoHandlerFoundException: No endpoint GET

My actual intention is to exclude results from the main search if there is another event with the same correlation-id containing a specific exception like "NoHandlerFoundException". That means I need a search per result of the main search. Do you know a solution for this? Thanks!
It is showing as a green circle at the moment, but it keeps flashing a warning - see the screenshot below.
Clicking on the triangle should display explanatory text.  Share that text here if you need help understanding it.
@renjith_nair Thank you for the response. I keep getting "check network internet connection" when I click the download button, and it fails to download. I was able to download the report once, but since then I keep getting this error. I know for a fact it is not an internet issue, because I am able to download the other panels' data directly when I click the default export button in Splunk. Is it something related to my code?

<row depends="$hide_this_always$">
  <panel>
    <table>
      <search>
        <done>
          <eval token="date">strftime(now(), "%d-%m-%Y")</eval>
          <set token="sid">$job.sid$</set>
        </done>
        <query>index=_internal</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">20</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="refresh.display">progressbar</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
    </table>
  </panel>
</row>
<row>
  <panel>
    <html>
      <a href="/api/search/jobs/$sid$/results?isDownload=true&amp;maxLines=0&amp;count=0&amp;filename=Vulnerability_$date$.csv&amp;outputMode=csv" class="button js-button">Download</a>
      <style>
        .button {
          background-color: steelblue;
          border-radius: 5px;
          color: white;
          padding: .5em;
          text-decoration: none;
        }
        .button:focus, .button:hover {
          background-color: #2A4E6C;
          color: White;
        }
      </style>
    </html>
  </panel>
</row>
Without giving out more information it's hard for us to help you (it's better to provide more context about your issue, screenshots, etc.). That said, it sounds like it's related to risky commands. Maybe it's to do with this: https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/SPLsafeguards
BTW, I don't know if it's clear, but a) you should be able to find the checkpoint files on disk, but... b) even if you don't, if you back up $SPLUNK_HOME/var/lib/modinputs I think you effectively back up your checkpoint files.  A bit of interwebs searching ought to confirm this.

Also note that the checkpoint files are useless if you are trying to back them up pre-upgrade (at least if you cross the magical version near 3.10 where it switches from checkpoint files to KV Store entries), because you can't slap them into place and expect it to find/use them any more.  It should migrate them during the upgrade, but I'm not sure it'll ever "re-migrate" later if you have to try to restore files into a KV Store-based system.  YMMV, etc.
I'm regularly seeing a warning triangle appear. How do I search to find out what is causing this?
I believe that this bug is planned to be fixed in 9.2.2.
Can you paste the actual cron entry in here?  From your further description, my guess is that it's just wrong somehow (or at least that's one of a few problems).

Also, if this is still happening, have you tried the simple expedient of just *changing* the timings to make it come at the time you expect it to come?  I think if you take a careful and measured approach, changing one thing at a time and seeing what effect it has, you'll a) figure it out and b) also figure out *why* it's doing what it's doing.
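For reference, a scheduled search cron in Splunk (the cron_schedule setting in savedsearches.conf) reads minute, hour, day-of-month, month, day-of-week, in that order. A hypothetical example of an easy mistake - the stanza name and times here are made up:

[my_daily_search]
# Intended: run daily at 06:30
# Wrong: "30 * 6 * *" runs at 30 minutes past *every* hour, but only on day 6 of the month
# Right: minute=30, hour=6, every day
cron_schedule = 30 6 * * *

Swapping two fields like this is one of the most common reasons a search fires at an unexpected time.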
It sounds like you've done pretty good basic troubleshooting already and confirmed that the data *should* be coming in.

So it very well may be coming in, and the reason you can't find it is that the time on the device is off. Maybe it's a week or a day behind, or even worse it's set to next month.  You *could* try searching for its IP address over all time just to see if this is the case.  Or maybe it's just that its timezone is mis- or unspecified, and it's always showing up as from 4 hours ago, so all searches running in timeframes closer to now than 4 hours ago just miss it.  (E.g. "now" ends up being squirreled away in Splunk as X hours ago, so "last 4 hours" never shows it.)

That's my guess - give that a think and a try and see what you find.

Happy Splunking,
Rich
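A quick sketch of that all-time check (the host value is a placeholder - use the device's IP or hostname):

index=* host="10.1.2.3" earliest=0
| stats count, min(_time) AS first_seen, max(_time) AS last_seen
| convert ctime(first_seen) ctime(last_seen)

If last_seen lands in the future or far in the past, the device clock or its timezone configuration is the likely culprit.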
The LINE_BREAKER setting requires a capture group.  The group is where events will be split.  Try this:

LINE_BREAKER = ()\w{3}\s\d\d:\d\d
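In context, this would sit in a props.conf sourcetype stanza. A minimal sketch, assuming the sourcetype is called my_syslog (a placeholder) and that each event starts with a three-letter month abbreviation followed by a time:

[my_syslog]
# Empty capture group: break before the timestamp, discarding nothing
LINE_BREAKER = ()\w{3}\s\d\d:\d\d
# Rely on LINE_BREAKER alone instead of line merging
SHOULD_LINEMERGE = false

Pairing LINE_BREAKER with SHOULD_LINEMERGE = false is the usual pattern, since it avoids the slower line-merging pass.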
It is unlikely that Splunk is adding them to the data it receives - what is your ingest path, i.e. how does the data get into Splunk and what configuration have you used along the way?