All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have two questions.

1st question: Below is the query that generates the stats I want to push into a summary index:

index="myIndex" host="myHost" source="/var/logs/events.log" sourcetype="ss:vv:events" (MTHD="POST" OR MTHD="GET") | rex field=U "(?P<ApiName>[^\/]+)(\/([a-z0-9]{32})|$)" | search (ApiName=abc OR ApiName=xyz) | dedup CR,RE | stats count as TotalReq by ApiName, Status | xyseries ApiName Status, TotalReq | addtotals labelfield=ApiName col=t label="ColTotals" fieldname="RowTotals"

It gives me the perfect result:

ApiName | 200 | 400 | 404 | 500 | RowTotals
abc | 12 | 2 | 4 | 1 | 19
xyz | 10 | 3 | 2 | 2 | 17
ColTotals | 22 | 5 | 6 | 3 | 36

But when I change stats to sistats in order to push into the summary index, it produces no results. Please help me with the query.

2nd question: I already have a summary index, and a stats report with a different query is already being pushed into it every day, which I have annotated using the Add Fields option in the Edit Summary Index window as report = firstReport. Can I push another report (the one above) into the same summary index with a different annotation, report = secondReport? Will that work, or do I have to create another summary index for this report as well? Please help.
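For context on how sistats is typically wired up, a sketch follows, reusing the index and field names from the question (the summary index name `summary` is an assumption). sistats replaces only the stats call in the populating search; report-formatting commands such as xyseries and addtotals belong in the search that reads the summary index, after a plain stats over the si- data:

```
Populating search (scheduled, results written to the summary index):
index="myIndex" host="myHost" source="/var/logs/events.log" sourcetype="ss:vv:events"
    (MTHD="POST" OR MTHD="GET")
| rex field=U "(?P<ApiName>[^\/]+)(\/([a-z0-9]{32})|$)"
| search (ApiName=abc OR ApiName=xyz)
| dedup CR,RE
| sistats count by ApiName, Status

Retrieval search (run against the summary index):
index=summary report=secondReport
| stats count as TotalReq by ApiName, Status
| xyseries ApiName Status TotalReq
| addtotals labelfield=ApiName col=t label="ColTotals" fieldname="RowTotals"
```

This split is why a populating search ending in sistats appears to "produce no result": its output is intermediate summary data, not a finished table.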
Background to this question

I am the developer of a Splunk app, recently published on Splunkbase, that is intended for use as a sample, in the following sense:

Scope and intended use of the app: [This] app is not intended to be a fully-fledged out-of-the-box solution [...]. Instead, the app contains sample dashboards that demonstrate some example use cases for visualizing data from [proprietary product name]. The developers of [this app] anticipate that customers will examine these sample dashboards, and then perhaps copy and adapt selected visualizations into their own bespoke Splunk apps to match their own specific requirements.

A separate website (external to Splunkbase) supplies sample data for the app, so that users who want to try out the app can do so without requiring that "proprietary product".

My problem

I covet a "Splunk AppInspect Passed" badge for the app. However, the AppInspect report includes the following failure:

[ Failure Summary ] Failures will block the Cloud Vetting. They must be fixed. check_indexes_conf_does_not_exist Apps and add-ons should not create indexes. Indexes should only be defined by Splunk System Administrators to meet the data storage and retention needs of the installation. Consider using Tags or Source Types to identify data instead index location. File: default/indexes.conf

The app contains an indexes.conf file that defines an index; the app's macros.conf file defines macros that refer to that index name; searches in the app's dashboards refer to those macros. I want users to store the sample data for this app in an index that is specifically for that purpose. I want them to be able to delete that index at will, without worrying about deleting other, "non-sample" data. I want to help inexperienced users avoid "polluting" indexes containing their "real" data with this "sample" data.

User beware?
That stance might be considered unhelpful to an inexperienced Splunk user who has just inadvertently loaded sample data into an index they shouldn't have. I anticipate that users will refer to this app as a starting point for developing their own apps, which might or might not similarly constrain searches by index (I think such constraints are quite likely; for example, in multi-tenant environments).

My question

Is there any way I can get that "Splunk AppInspect Passed" badge without removing indexes.conf from the app? Yes, I could remove indexes.conf, push the task of defining a specific index onto the user, and describe how to do this in documentation, but I deliberately want to minimize the number of manual setup steps for this app.
I have a cluster setup, and we need to collect local event logs from workstations using WMI, without installing a Universal Forwarder on the targets, so I need to know the prerequisites.
Hi Splunk community, we have a Heavy Forwarder which mostly ingests syslog data via TCP. The volume of ingested data is fairly constant. To check whether we could get more performance, we disabled useACK in the outputs.conf of the HF. We were quite surprised that, instead of increasing throughput by saving overhead, we observed a sharp drop in ingestion, including events from the internal logs of the Heavy Forwarder. We went back to enabling acknowledgement after an hour. Can you give us any advice that could explain this behaviour? Could this be a hint that there is a deeper problem in our Splunk infrastructure that is smoothed over by the acknowledgement feature? Thanks in advance! Kind regards, Jens Wunder
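For reference, the setting under discussion lives in outputs.conf on the forwarder. A minimal fragment is below (the group name and server addresses are placeholders, not taken from the post):

```
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true
```

With useACK enabled, the forwarder keeps events in a wait queue until the indexer acknowledges them, which both protects against data loss on the path and paces the sender; disabling it removes that safety net as well as the pacing.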
Hi, I'd like to automate a backup of the Splunk config files every 24 hours. Are there any apps or scripts available to achieve this? Many thanks.
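As a sketch of one low-tech approach (paths and the retention count are assumptions, not an official recommendation), a small shell function archiving the Splunk configuration tree can be run from cron:

```shell
#!/bin/sh
# Archive a Splunk instance's configuration (the etc/ tree holds all .conf
# files, apps, and users) into a timestamped tarball, keeping recent copies.
# Usage: backup_splunk_etc <SPLUNK_HOME> <BACKUP_DIR>
backup_splunk_etc() {
    src="$1"
    dest="$2"
    stamp="$(date +%Y%m%d-%H%M%S)"
    mkdir -p "$dest"
    # -C changes into SPLUNK_HOME so the archive contains a relative etc/ tree.
    tar -czf "$dest/splunk-etc-$stamp.tar.gz" -C "$src" etc
    # Retention: keep only the 14 most recent archives.
    ls -1t "$dest"/splunk-etc-*.tar.gz 2>/dev/null | tail -n +15 | xargs -r rm -f
}
```

A crontab entry such as `0 2 * * * /usr/local/bin/splunk-backup.sh` (where that script sources this function and calls, say, `backup_splunk_etc /opt/splunk /var/backups/splunk`) would run it nightly at 02:00.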
Hello, I use the search below in order to monitor the last reboot and the last logon date:

`LastLogonBoot` | eval SystemTime=strptime(SystemTime, "'%Y-%m-%dT%H:%M:%S.%9Q%Z'") | stats latest(SystemTime) as SystemTime by host EventCode | xyseries host EventCode SystemTime | rename "6005" as LastLogon "6006" as LastReboot | eval NbDaysLogon=round((now() - LastLogon)/(3600*24), 0) | eval NbDaysReboot=round((now() - LastReboot )/(3600*24), 0) | eval LastLogon=strftime(LastLogon, "%y-%m-%d %H:%M") | eval LastReboot=strftime(LastReboot, "%y-%m-%d %H:%M") | lookup test.csv HOSTNAME as host output SITE | stats values(LastReboot) as "Last reboot date" values(NbDaysReboot) as "Days without reboot" values(LastLogon) as "Last logon date" values(NbDaysLogon) as "Days without logon" by host SITE | rename host as Hostname, SITE as Site | sort -"Days without reboot" -"Days without logon"

From this search, I have created an alert which is a little different, because I match the data with a new index. That's the reason why I use a join command:

[|`tutu` earliest=-30d latest=now | lookup toto.csv NAME as AP_NAME OUTPUT Building | stats last(AP_NAME) as "Access point", last(Building) as "Geo building" by host | join host type=outer [|`LastLogonBoot` earliest=-30d latest=now | eval SystemTime=strptime(SystemTime, "'%Y-%m-%dT%H:%M:%S.%9Q%Z'") | stats latest(SystemTime) as SystemTime by host EventCode | xyseries host EventCode SystemTime | rename "6005" as LastLogon "6006" as LastReboot | eval NbDaysReboot=round((now() - LastReboot )/(3600*24), 0) | eval LastReboot=strftime(LastReboot, "%y-%m-%d %H:%M") | lookup test.csv HOSTNAME as host output SITE BUILDING_CODE DESCRIPTION_MODEL ROOM STATUS | stats last(LastReboot) as "Last reboot date", last(NbDaysReboot) as "Days without reboot", last(DESCRIPTION_MODEL) as Model, last(SITE) as Site, last(AP_NAME) as "Access point", last(BUILDING_CODE) as Building, last(ROOM) as Room, last(STATUS) as Status by host ] | search Site = titi | rename host as Hostname | table Hostname Model Status "Days without reboot" "Last reboot date" Site Building Room "Access point" "Geo building" | sort -"Days without reboot"

My question is the following: when I execute the search, some events appear that do not exist in my alert results, even though they should. How can that be explained? Is it due to the join command?
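One way to test whether join is the culprit (a sketch only, reusing the macros and field names from the question; the EventCode filter is an assumption about what the subsearch should keep): join silently drops rows once its subsearch hits its row or runtime limits, so a common alternative is append plus a merging stats. Note that append is still a subsearch with limits of its own, so this is a diagnostic step rather than a guaranteed fix:

```
|`tutu` earliest=-30d latest=now
| lookup toto.csv NAME as AP_NAME OUTPUT Building
| stats last(AP_NAME) as "Access point", last(Building) as "Geo building" by host
| append
    [|`LastLogonBoot` earliest=-30d latest=now EventCode=6006
     | eval SystemTime=strptime(SystemTime, "'%Y-%m-%dT%H:%M:%S.%9Q%Z'")
     | stats latest(SystemTime) as LastReboot by host ]
| stats values(*) as * by host
```

If the host counts differ between this version and the join version, the join limits are a likely explanation for the missing events.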
Hi all, I am finding duplicate events during search operations. I am a bit confused about where the issue lies and how to start investigating it. Regards, Shivanand
I have a dashboard for daily alerts, and I want to add a comment text box at the extreme right of it for the team to add comments. Splunk query:

index=firewall (IP="10.10.10.*" OR IP="10.10.20.*" OR IP="100.100.20.*") (Status=deny) | stats count(IP) as "Hits" by SrcIP, DstIP, Port, Status

Results in table format:

SrcIP | DstIP | Port | Status | Hits
10.10.10.1 | 10.10.10.2 | 80 | deny | 11
10.10.20.1 | 10.10.10.2 | 443 | deny | 45

I want to add a Comments text box at the extreme right, so that the table looks like this:

SrcIP | DstIP | Port | Status | Hits | Comments
10.10.10.1 | 10.10.10.2 | 80 | deny | 11 |
10.10.20.1 | 10.10.10.2 | 443 | deny | 45 |

How can I add this to a Splunk dashboard?
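Splunk tables are not editable in place, so one common pattern (a sketch; the lookup file name and its columns are assumptions) keeps comments in a lookup keyed by the alert fields and appends them at search time:

```
index=firewall (IP="10.10.10.*" OR IP="10.10.20.*" OR IP="100.100.20.*") (Status=deny)
| stats count(IP) as "Hits" by SrcIP, DstIP, Port, Status
| lookup comments.csv SrcIP DstIP Port OUTPUT Comments
| fillnull value="" Comments
| table SrcIP DstIP Port Status Hits Comments
```

Writing comments back from the dashboard itself needs an input (for example, a text box whose submit runs an `| outputlookup append=true comments.csv` search) or a KV store lookup; plain Simple XML has no built-in editable column.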
From the Media tab while editing an app on Splunkbase:

Add Screenshots. Recommended screenshot size is 1200px by 900px.

When you click a screenshot thumbnail on the app page, Splunkbase presents a larger version of the image in a popup. The constraints of that popup's CSS mean that, in 1200px by 900px screenshots of app dashboards, text is unreadable (unless you've used "zoom" in your browser to enlarge the text, but that introduces its own issues). What resolution should I use to ensure app screenshots are readable on Splunkbase?
I try to log in to the web console and get a 500 Internal Server Error. I've tried different solutions and they don't work. The error message:

500 Internal Server Error. The server encountered an unexpected condition which prevented it from fulfilling the request. Click here to return to Splunk homepage.

Solutions I've tried that have not corrected the issue:

https://answers.splunk.com/answers/170065/why-am-i-getting-the-following-error-logging-into.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev
https://answers.splunk.com/answers/425861/why-am-i-getting-a-500-error-switching-to-splunk-f.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev
I have a data feed to Splunk that contains number, state, and service name. This comes into Splunk continuously as the state/service name changes (number stays the same, as it is the key field). Regardless of the current service name at the time of the query, I would like to retrieve the latest "state" on data where the "number" has or had dv_u_service of "ODD CBJ PROD". Here's what I'm trying to achieve:

[ All data with dv_u_service="ODD CBJ PROD" ] + [ All data ]

This is joined by the field "number", common to both searches. The index and the sourcetype for the two searches above are the same. I've achieved this using join, but it's painstakingly slow. Is there a better way?

(index=gbs_its_pds_infra_servicenow) (dv_u_service="ODD CBJ PROD") | eventstats latest(state) as latest_state by number | dedup number | table dv_u_service,assignment_group_name,latest_state,number | join left=L right=R type=inner max=1 where L.number=R.number [search (index=gbs_its_pds_infra_servicenow) | eventstats latest(state) as latest_state by number ] | table L.dv_u_service,L.assignment_group_name,L.latest_state,L.number,R.dv_u_service,R.latest_state,R.assignment_group_name

Also, at the end of the query, how do I only show the results where L.dv_u_service<>R.dv_u_service? Would it be through eval?
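A join-free sketch of the same idea (assuming latest() ordered by _time is the semantics you want): aggregate everything once by number, keep the set of service names each number has ever had, and filter on membership. The final where clause also covers the L.dv_u_service<>R.dv_u_service part without needing eval:

```
index=gbs_its_pds_infra_servicenow
| stats latest(state) as latest_state
        latest(dv_u_service) as current_service
        latest(assignment_group_name) as assignment_group
        values(dv_u_service) as all_services
        by number
| search all_services="ODD CBJ PROD"
| where current_service!="ODD CBJ PROD"
| table number current_service latest_state assignment_group
```

Because this scans the index once instead of running two correlated searches, it typically performs far better than join on large datasets.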
Hi. I configured an IMAP mailbox on a distributed setup. The settings are DeleteWhenDone = False and IMAPsearch = UNDELETED. This causes Splunk to index the same email on every script run. Is there a configuration change I can make so that it does not download already-indexed emails again? The requirement is not to delete the email from the server (DeleteWhenDone = False). Regards, Ronald
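If the input passes IMAPsearch straight through to the server's SEARCH command (an assumption about the app's behavior; verify against its documentation), one option is switching the filter from UNDELETED to UNSEEN, since UNSEEN is a standard IMAP search key and fetching a message normally marks it \Seen:

```
DeleteWhenDone = False
IMAPsearch = UNSEEN
```

This only works if nothing else (another mail client or script) opens the mailbox and marks messages as read before the input runs.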
Hi, I am monitoring a log file from one folder and setting the host field to the folder name. For example, I am monitoring C:\Logs\GTA(Brazil).*zip, and here my host name is "GTA(Brazil)". But after some days I changed my folder name to GSTA(Brazil). Now, whenever I search with the index and the new host name, I want to see the old data as well (together with the new host name's data) under host name GTA(Brazil). For the mapping, I am creating a lookup which contains the following information:

Name | new_name
GTTA(Brazil) | GTA(Brazil)
GTTA(Brazil) | GSTA(Brazil)

Now, how should a macro be created that takes the new host name or the old host name as input and gives results for the combination of both? Or is there another way? Note: the folder name can get changed at any time, and I can update the lookup with the new name, but when searching for data by host I want to get all data present in the folder. Thanks,
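One way to expand a host to all of its aliases at search time (a sketch using the lookup columns from the question; the index and lookup file names are placeholders) is an inputlookup subsearch, which the outer search receives as an OR of host= terms. This is the kind of thing a macro taking the name as an argument could wrap:

```
index=myIndex
    [ | inputlookup hostmap.csv
      | search Name="GTTA(Brazil)"
      | fields new_name
      | rename new_name as host
      | format ]
```

The subsearch expands to an OR of all mapped host values for that name, so both the old and the new folder's data are returned in one search.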
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=1000' at line 1
Hello, I need to formulate a search where I have two date fields: one is START_TIME, 2020-02-28 19:19:58.0, and the other is END_TIME, 2020-03-03 19:19:58.0. What I need to find out is whether the START_TIME is before the weekend and the END_TIME is after the weekend, and chart my results based on that, including other fields. I only want results where the START_TIME is before the weekend and the END_TIME is after the weekend. Events where the START_TIME and the END_TIME are both before or during the weekend can be excluded. For example, a START_TIME of 2020-02-28 19:19:58.0 with an END_TIME of 2020-02-29 19:19:58.0 would not count, as it was started and ended during the weekend. I only want events started before and ended after the weekend to count. Any help would be appreciated.
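A sketch of the comparison (the strptime format string, especially the sub-second part, is an assumption to verify against your data, and events that start inside a weekend would need separate handling): parse both timestamps, find the first Saturday 00:00 after START_TIME with relative_time, and require END_TIME to land at or after the following Monday 00:00:

```
| eval start=strptime(START_TIME, "%Y-%m-%d %H:%M:%S.%N"),
       end=strptime(END_TIME, "%Y-%m-%d %H:%M:%S.%N")
| eval weekend_start=relative_time(start, "+7d@w6")
| eval weekend_end=weekend_start + 2*86400
| where start < weekend_start AND end >= weekend_end
```

Here `+7d@w6` adds seven days and snaps back to Saturday midnight, which for a weekday start yields the upcoming weekend's start; adding two days of seconds gives Monday midnight, so the where clause keeps only events that span the whole weekend.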
Hello, I have the following "where not" query returning rows that exist in the subsearch. The query:

environment=test earliest=-48h latest=-24h index=iis_openapi /internal/loyalty/v1/ cs_uri_stem="registrations" cardid="*" WHERE NOT [ search earliest=-48h index=log-cdx-prod source=kubernetes sourcetype=_json "cardRegistered" "cardId" | rename cardNumber as cardid | fields cardid | format] | table cardid

The query is meant to take the cardid list from the first search and return rows where cardid is not found in the second, subsearch query. However, I am getting results where cardid is present in the second query, which is incorrect, since the condition is "where not". Any ideas what is going on here?
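For comparison, a sketch of the usual exclusion pattern (field names as in the question): the search command has no WHERE keyword, so the subsearch is negated with a plain NOT, and the subsearch must return a field literally named cardid for the generated OR-list to match. Deduplicating helps stay under the subsearch result limit, which otherwise truncates silently:

```
environment=test earliest=-48h latest=-24h index=iis_openapi /internal/loyalty/v1/
    cs_uri_stem="registrations" cardid="*"
    NOT [ search earliest=-48h index=log-cdx-prod source=kubernetes sourcetype=_json
          "cardRegistered" "cardId"
          | rename cardNumber as cardid
          | dedup cardid
          | fields cardid ]
| table cardid
```

A stray WHERE in front of the subsearch is treated as an ordinary search term rather than a condition, which is one way "where not" queries end up returning rows they were meant to exclude.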
Hi, I have written a query to return a list of details, as below. However, the results for all 30 days are not populating; instead, it is giving only the results for the last 3 days.

"http://pinky/createcustomer" NOT "http:/pinky/confirmcustomer" | join type=left vsid [ search "http:/pinky/searchcustomer" ] | eval time=strftime(_time,"%a %B %d %Y %H:%M:%S.%N")| stats count(vsid) as TempcustomerCount list(email) as Email list(firstname) as FirstName list(lastname) as LastName list(JSESSIONID) as JSessionID list(time) as Time by customerCode,previewCode,vsid | where TempcustomerCount>=5
Hi, I'm trying to get results based on the most recent field value. How do I filter the events with the most recent scan date for all IPs? scan_date field values: 03-01, 02-22
I have disabled an alert, but even after that it is still sending results. Could you please help?
Encountered an issue with Splunk SAML authentication when using scripted inputs to leverage Splunk Cloud Gateway for mobile. We have configured SAML with Azure AD for SSO with our existing SHC. As part of the Splunk Cloud Gateway implementation, we performed a few additional steps mentioned in the documentation, which recommends including scripted inputs: https://docs.splunk.com/Documentation/Gateway/1.9.0/Installation/SAMLauth

Enabled token authentication on the SHC. Added azureScripted.py and commonAuth.py to the $SPLUNK_HOME/etc/auth/scripts directory. On the SH GUI, under the SAML configuration, configured a few additional options under Authentication Extensions:

Script Path: azureScripted.py
Script timeout: 10s
Get User Info time-to-live: 10s
Script Functions: getUserInfo
Script Secure Arguments: azureKey:XXXXXXXXX

After saving the above config, we started noticing issues with SSO auth. The very first SAML request works fine, and subsequent requests start failing with 404 or 500 response errors in the browser.
In the Splunk internal logs, we observed the errors below after a successful Splunk SAML response. Splunk query:

index=_internal sourcetype=splunkd OR sourcetype=splunkdconf SAML OR component=AuthenticationManager* | dedup event_message

03-03-2020 13:45:04.195 -0500 ERROR AuthenticationManagerSAML - authentication extension getUserInfo() failed for user: XXXX
03-03-2020 13:45:03.864 -0500 INFO AuthenticationManagerSAML - Calling getUserInfo() authentication extension for user: XXXX
03-03-2020 13:35:45.919 -0500 WARN Saml - Original response xml = [raw SAML response omitted: XML signature, X.509 certificate, and a long list of Azure AD group claims]

It seems the azureScripted.py script is not able to obtain the relevant token from the API key and query the Azure Graph endpoint for user impersonation. We need help troubleshooting this issue, and would like to hear whether any users have a successful implementation of Splunk Cloud Gateway with SAML authentication.