Hi all, I have to track Splunk modifications (Correlation Searches, conf files, etc.). I tried to use the _configtracker index, which is complete and meets all my requirements, but it doesn't track the user who performs an action. How could I do this? Thank you for your help. Ciao. Giuseppe
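A minimal sketch of one possible workaround, assuming the _audit index is available (it is enabled by default) and that it records the user for changes made through Splunk Web or the REST API; you could correlate its output with _configtracker by time:

index=_audit sourcetype=audittrail (action=edit* OR action=create* OR action=delete*)
| table _time user action info
| sort - _time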
Hi, I'm receiving the following error message: Error in 'EvalCommand': Failed to parse the provided arguments. Usage: eval dest_key = expression. I am trying to create the search via the REST API. Is there something special that I need to know about API calls? Via the UI, the search works. Thanks!
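A common cause is the eval expression getting mangled by shell quoting or URL encoding before Splunk parses it. A minimal sketch of creating a saved search over REST with curl, assuming a local management port and placeholder credentials; --data-urlencode keeps the pipes, quotes, and equals signs in the SPL intact:

curl -k -u admin:changeme https://localhost:8089/servicesNS/admin/search/saved/searches \
  -d name=my_saved_search \
  --data-urlencode 'search=index=main | eval dest_key = src_ip . ":" . src_port'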
We have installed the "Proofpoint TAP Modular Input" add-on on a Victoria search head and created an input (API call) to fetch the logs. On the first run it fetched one event, and on subsequent runs it throws an error: "pp_tap_input: When trying to retrieve the last poll time, multiple kvstore records were found". We tried creating a new input and observed the same behavior.
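A minimal sketch for inspecting (and, as a workaround, pruning) the add-on's checkpoint collection over the KV store REST API; the app and collection names are placeholders (assumptions — check the add-on's collections.conf), and deleting duplicate checkpoint records is a workaround rather than a documented fix:

curl -k -u admin:changeme \
  "https://localhost:8089/servicesNS/nobody/<tap_app_name>/storage/collections/data/<checkpoint_collection>"

curl -k -u admin:changeme -X DELETE \
  "https://localhost:8089/servicesNS/nobody/<tap_app_name>/storage/collections/data/<checkpoint_collection>/<duplicate_record_key>"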
Hello! I have a log that shows locking/unlocking of PCs:
1710320306,u09,unlocked
1710320356,u09,locked
1710320360,u10,unlocked
1710320363,u10,locked
1710320369,u11,unlocked
1710320374,u11,locked
1710320379,u09,unlocked
1710320384,u09,locked
1710320389,u10,unlocked
1710321119,u10,locked
1710321126,u11,unlocked
1710322754,u11,locked
1710322760,u09,unlocked
1710324580,u09,locked
1710326550,u09,unlocked
1710328364,u09,locked
The first field is a unix timestamp, the second the user, the third the action. I need to get statistics for PCs being unlocked by users, i.e. the sum of seconds between unlocked and locked actions for each user. Please help with the search query.
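A minimal sketch, assuming the three fields still need extracting (the index and sourcetype are placeholders) and that unlocked/locked events alternate strictly per user:

index=<your_index> sourcetype=<your_sourcetype>
| rex field=_raw "^(?<epoch>\d+),(?<user>[^,]+),(?<action>\w+)$"
| eval epoch=tonumber(epoch)
| sort 0 user epoch
| streamstats window=1 current=f last(epoch) as prev_epoch last(action) as prev_action by user
| where action="locked" AND prev_action="unlocked"
| eval unlocked_seconds = epoch - prev_epoch
| stats sum(unlocked_seconds) as total_unlocked_seconds by user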
We are having a problem with maintenance windows in Splunk IT Service Intelligence. We have a common service that two other services depend on, and on top of those two there are further services dependent on them:
Service a depends on "Service in maintenance"
Service b depends on "Service not in maintenance"
Both "Service in maintenance" and "Service not in maintenance" depend on "Common Service"
With the current implementation in ITSI, we are forced to put both "Service in maintenance" and "Common Service" into maintenance mode to avoid getting wrong health scores in "Service a". This creates a problem for us: if an error occurs in "Common Service" during the maintenance window, it won't be reflected correctly in "Service not in maintenance", so we will not be able to detect failures that affect our users. We raised a ticket, which correctly stated that this works as designed and documented. We have an idea, ITSIID-I-359, but so far it hasn't been upvoted. Kind regards
Hi, I'm trying to follow this tutorial https://splunkui.splunk.com/Create/ComponentTutorial and I have a problem when I start the demo. The steps I'm following are:
Navigate to an empty directory of your choice and invoke Create:
mkdir -p ~/Code/MyTodoList && cd ~/Code/MyTodoList
npx @splunk/create
(I choose "A monorepo with a React Component".)
Run setup and start the component in demo mode:
yarn run setup
cd packages/react-todo-list
yarn run start:demo
This gives me the following error:
ERROR in ../../node_modules/@splunk/splunk-utils/url.js 11:19-41
Module not found: Error: Can't resolve 'querystring' in 'c SPLUNK\Code\MyTodoList\node_modules\@splunk\splunk-utils'
BREAKING CHANGE: webpack < 5 used to include polyfills for node.js core modules by default. This is no longer the case. Verify if you need this module and configure a polyfill for it.
If you want to include a polyfill, you need to:
- add a fallback 'resolve.fallback: { "querystring": require.resolve("querystring-es3") }'
- install 'querystring-es3'
If you don't want to include a polyfill, you can use an empty module like this: resolve.fallback: { "querystring": false }
How can I handle this? Thanks in advance.
node -v: v20.11.1
npm -v: 10.2.4
yarn -v: 1.22.22
I did this:
npm install querystring-es3
And this is the fallback in webpack.config.js:
const path = require('path');
const { merge: webpackMerge } = require('webpack-merge');
const baseComponentConfig = require('@splunk/webpack-configs/component.config').default;

module.exports = webpackMerge(baseComponentConfig, {
    resolve: {
        fallback: { querystring: require.resolve('querystring-es3') },
    },
    entry: {
        ReactTodoList: path.join(__dirname, 'src/ReactTodoList.jsx'),
    },
    output: {
        path: path.join(__dirname),
    },
});
But the error is the same.
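A guess worth checking rather than a confirmed fix: in @splunk/create monorepos, start:demo typically builds from a separate webpack config under the package's demo/ directory, so a fallback added only to the component's top-level webpack.config.js may never reach the demo build. A sketch of applying the same fallback there (the file name and the config it extends are placeholders — check what the start:demo script in package.json actually points at):

// demo webpack config (exact file name varies by template version)
const { merge: webpackMerge } = require('webpack-merge');
const baseDemoConfig = require('./webpack.config'); // placeholder for the config the demo already extends

module.exports = webpackMerge(baseDemoConfig, {
    resolve: {
        // same node-core polyfill fallback as in the component's webpack.config.js
        fallback: { querystring: require.resolve('querystring-es3') },
    },
});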
How do I extract two fields from the message? From messages like the sample below, I need to extract nagl (the segment after /v1/expense/extract/demand/) as one field and demand_con.csv as another field. I am currently extracting with:
|rex field=message max_match=0 "API: START: /v1/expense/extract/odemand/ (?<OnDemandFileName>[^\n]\w+\S+)"
Sample message:
API: START: /v1/expense/extract/demand/nagl/demand_con.csv
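A minimal sketch of a rex that captures the two path segments separately, assuming the prefix is always /v1/expense/extract/demand/ (note the original pattern has "odemand" and a stray space before the capture group, which would prevent any match; SubFolder is just an illustrative field name):

| rex field=message max_match=0 "API: START: /v1/expense/extract/demand/(?<SubFolder>[^/]+)/(?<OnDemandFileName>[^\s/]+)"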
When adding a Time Range Picker in Dashboard Studio, the formatting for the date and time range is month day year. How do I change this formatting to day month year? (Screenshots of the current and desired formats were attached.)
Is there a feature that notifies you of new Splunk version releases, maybe via email or by subscribing to a newsletter or something?
I have the below 3 different types of API logs, where I have to treat all 3 as the same API and get its count. There are multiple versions of the same API, with or without a user GUID, which is a unique value.
"address":"http://test/services/user/v1/deleteUser/342ad-123m4-r43rm-144dgdg
"address":"http://test/services/user/v2/deleteUser/delete/342ad-123m4-r43rm-144dgdg
"address":"http://test/services/user/v2/deleteUser
Looking for a regex which reads the API up to where the alphanumeric string starts. In short, if I do stats count by API it should give a count of 3.
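A minimal sketch, assuming the API name is always the alphabetic segment immediately after the version component (the index is a placeholder):

index=<your_index>
| rex field=_raw "services/user/v\d+/(?<API>[A-Za-z]+)"
| stats count by API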
Hi guys, thank you in advance. Is it possible to use a value from the search results as a parameter in | sendemail from= ? In | sendemail to= we can use $result.mail_to$, but | sendemail from=$result.mail_from$ doesn't work. We have already disabled the security options for this. For example:
index="main"
| eval mail_from = "username@mail.com"
| eval mail_to = "username@mail.com"
| eval subject = "subject"
| table username age country city
| sendemail to=$result.mail_to$ from=$result.mail_from$ subject=$result.subject$ message="This is an example message" sendresults=true inline=true format=table sendcsv=true
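One thing worth checking, as an assumption rather than a confirmed cause: $result.fieldname$ tokens are filled from the first row of the final results, so a field dropped by table cannot be referenced. A minimal sketch that keeps the token fields in the output:

index="main"
| eval mail_from="username@mail.com", mail_to="username@mail.com", subject="subject"
| table username age country city mail_from mail_to subject
| sendemail to=$result.mail_to$ from=$result.mail_from$ subject=$result.subject$ message="This is an example message" sendresults=true inline=true format=table sendcsv=true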
Good morning. I have a field that I've called problem_detail in our Helpdesk index. It contains all the types of problems that are logged with us. I would like to merge together only those that are associated with email queries; there are about 15 different ones.
index=mmuh_helpdesk sourcetype=mmuh_helpdesk_json
| dedup id
| fillnull value=NULL
| search "problemtype.detailDisplayName"!=*AGRESSO*
| eval problem_detail='problemtype.detailDisplayName'
| eval problem_detail=replace(problem_detail, "&#8226","")
| eval problem_detail=replace(problem_detail, ";","|")
| eval techGroupLevel = 'techGroupLevel.levelName'
| eval techGroupLevel = replace(techGroupLevel, "&nbsp;"," ")
| eval techGroupLevel = replace(techGroupLevel, " ","")
| eval techGroupLevel = replace(techGroupLevel, "Level"," Level")
| eval location_Name = 'location.locationName'
| eval status = 'statustype.statusTypeName'
| eval priority = 'prioritytype.priorityTypeName'
| eval techGroupId = 'techGroupLevel.id'
| eval tech_Name = 'clientTech.displayName'
| stats count by problem_detail
This SPL is giving me the full list of 158 problem details, and from there I can see around 15 of these relate to email. Is there a way I can combine the totals from all the problem_details that contain 'email' together? I tried eval and then coalesce but it didn't work. :( Thank you
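A minimal sketch of one way to do this: fold every email-related value into a single bucket before the stats, assuming "email" appears literally (case-insensitively) in those ~15 values:

... | eval problem_group=if(match(problem_detail, "(?i)email"), "Email Queries", problem_detail)
| stats count by problem_group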
I'm trying to test Splunk Cloud and have registered for the free trial, but I have not received any email so far from Splunk. I have faced a similar problem a few times. What do I do in this situation?
This Splunk search query retrieves logs from the specified index, host, and sourcetype, filtering them on fields such as APPNAME, event, httpMethod, and loggerName. It then deduplicates the events on the INTERFACE_NAME field and counts the remaining unique events.
The Splunk alert monitors the iSell application's request activity logs, specifically looking for cases where no data is processed within the last 30 minutes. If fewer than 2 unique events are found, the alert triggers once, notifying the appropriate parties.
On our end, records are processed successfully, yet even when there is more than one successful event the alert still triggers and raises an INC (the trigger condition is count less than 2). Please check why we are getting false alerts and advise.
index=*core host=* sourcetype=app_log APPNAME=iSell event=requestActivity httpMethod=POST loggerName="c.a.i.p.a.a.a.StreamingActor"
| dedup INTERFACE_NAME
| stats count
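One possible cause, offered as an assumption: dedup INTERFACE_NAME keeps a single event per interface name, so if all the requests in the 30-minute window share one INTERFACE_NAME the count collapses to 1 even though many requests succeeded, and the alert fires. A minimal sketch that makes the intent explicit with a distinct count instead:

index=*core host=* sourcetype=app_log APPNAME=iSell event=requestActivity httpMethod=POST loggerName="c.a.i.p.a.a.a.StreamingActor"
| stats dc(INTERFACE_NAME) as unique_interfaces count as total_events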
Is the Oracle Diagnostic Logging (ODL) format supported in any way by Splunk? On the forum I have found only one topic regarding it, but it was written 8 years ago. This format, which I read and analyze every day, is used by SOA and OSB diagnostic logs. It is, more or less, like a CSV structure, but instead of tab/space/comma, each value is packed into brackets. Below is an example with a short description:
[2010-09-23T10:54:00.206-07:00] [soa_server1] [NOTIFICATION] [] [oracle.mds] [tid: [STANDBY].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 0000I3K7DCnAhKB5JZ4Eyf19wAgN000001,0] [APP: wsm-pm] "Metadata Services: Metadata archive (MAR) not found."
Timestamp, originating: 2010-09-23T10:54:00.206-07:00
Organization ID: soa_server1
Message Type: NOTIFICATION
Component ID: oracle.mds
Thread ID: tid: [STANDBY].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'
User ID: userId: <anonymous>
Execution Context ID: ecid: 0000I3K7DCnAhKB5JZ4Eyf19wAgN000001,0
Supplemental Attribute: APP: wsm-pm
Message Text: "Metadata Services: Metadata archive (MAR) not found."
Any solutions or hints on how to manage it in Splunk? Regards, KP.
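I'm not aware of an official ODL add-on, but the format can be handled with custom props. A minimal props.conf sketch, assuming a custom sourcetype name and that the fixed leading fields are enough; the extraction stops before the thread field because tid itself contains nested brackets:

[oracle:odl]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\[\d{4}-\d{2}-\d{2}T
TIME_PREFIX = ^\[
# if %:z is rejected by your Splunk version, try %z
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
MAX_TIMESTAMP_LOOKAHEAD = 35
EXTRACT-odl_head = ^\[[^\]]+\]\s\[(?<server>[^\]]*)\]\s\[(?<msg_type>[^\]]*)\]\s\[(?<msg_id>[^\]]*)\]\s\[(?<component>[^\]]*)\]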
Hello All, I am using the trellis format to display the unique / distinct count of log sources in our environment. Below is my query and the dashboard output. Notice how it shows "distinct_..." at the top of the box. How do I remove this? I just want it to show the title OKTA, not the field name, on top of the boxes. Below is my query for the OKTA log source:
| tstats dc(host) as distinct_count where index=okta sourcetype="OktaIM2:log"
Thanks in advance
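A minimal sketch of one workaround, assuming trellis labels each panel with the aggregate field's name, so renaming the field to the desired title changes the label:

| tstats dc(host) as OKTA where index=okta sourcetype="OktaIM2:log"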
In Splunk, a user is getting the following error: Could not load lookup=LOOKUP-pp_vms, but admin is not getting any such errors. The lookup file is also not present. What do we need to do?
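A hedged first step, as an assumption rather than a diagnosis: since admin is unaffected, this reads like a permissions/sharing difference on the automatic lookup rather than purely a missing file. A minimal sketch for checking where the LOOKUP-pp_vms definition lives and who can read it (run as admin):

| rest /servicesNS/-/-/data/props/lookups splunk_server=local
| search title="*pp_vms*"
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read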
We have a Splunk dashboard for our team in a Splunk cluster. Almost every report item has an exclamation symbol and contains the message below. The issue has been present for the past month. Could you please help me fix it?
Error details:
---------------------
[*-199.corp.apple.com] Configuration initialization for /ngs/app/splunkp/mounted_bundles/peer_8089/*_SHC took longer than expected (1145ms) when dispatching a search with search ID remote_sh-*-13.corp.apple.com_2320431658__232041658__search__RMD578320bc0a7e9dada_1709881516.707_378AAA09-A2C2-4B63-B88A-50A6B29A67DF. This usually indicates problems with underlying storage performance."
I have all the relevant data I need from a single source, but I want to present it in a way that I can't get to work. I want to show which departments/users are using specific URLs, with the count, and put them on a single line with the corresponding URL:
Team1     User1     URL1     Count
Team2     User4
Team3     User9
------------------------------------------------------------------------
Team1     User3     URL2     Count
Team4     User4
          User12
          User16
          User17
------------------------------------------------------------------------
Team3     User1     URL3     Count
Team6     User3
Team10    User12
------------------------------------------------------------------------
Let me know if I need to clarify anything.
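A minimal sketch of one way to group this, assuming the extracted fields are called team, user, and url (placeholders for whatever your extractions are named); values() collapses the teams and users into multivalue lists per URL:

index=<your_index>
| stats values(team) as Teams values(user) as Users dc(user) as Count by url
| table Teams Users url Count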
There seems to be a lot of information about other Cisco VPN technologies (ASA/Firepower/AnyConnect), but I am not finding much relating to FlexVPN (site-to-site) tunnels. Maybe I am not looking up the correct terminology. FlexVPN runs on IOS XE. I have logging configured the same as far as using logging trap informational (the default), and noticed that we do not seem to be getting much data relating to the specifics of the tunnels, negotiations, etc., from a raw syslog perspective. What we would like to be able to do is monitor the tunnels, so we know whenever a tunnel is brought up, taken down, or source (connection) IPs change. Possibly other things we haven't thought of yet; hoping to encounter someone else who has used the same technologies and has something already built out. Thank you in advance.
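Not a full answer, but two hedged starting points. On the IOS XE side, the default informational trap level often won't emit IPsec session events unless crypto session logging is enabled, so it may be worth verifying whether the crypto logging session command is configured on the routers (check availability on your release). With the resulting %CRYPTO-5-SESSION_STATUS messages flowing, a minimal SPL sketch for tunnel up/down visibility (the index is a placeholder):

index=<network_syslog> ("%CRYPTO-5-SESSION_STATUS" OR ("%LINEPROTO-5-UPDOWN" "Interface Tunnel"))
| table _time host _raw
| sort - _time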