Hi All, I referred to the following https://community.splunk.com/t5/Dashboards-Visualizations/How-to-display-the-page-last-updated-time-in-the-dashboard/m-p/599698#M49203 to display the last refreshed/updated date-time on the dashboard. In my case, it uses my local system's timezone to display the result. I changed the timezone in Splunk from the default to EST, but it still displays the time in the local timezone. Can anybody please share a way to display the date-time for a specific timezone, irrespective of the local timezone from which the dashboard is being accessed? Thank you
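Since strftime renders epoch times in each viewer's timezone preference, one workaround is to shift the epoch so the rendered wall clock matches a fixed zone. This is a sketch, not an official feature: the -5*3600 offset and the "EST" label are assumptions for US Eastern Standard Time, and DST is not handled.

| makeresults
| eval local_off = strftime(now(), "%z")
| eval local_sec = if(substr(local_off,1,1)="-", -1, 1) * (tonumber(substr(local_off,2,2))*3600 + tonumber(substr(local_off,4,2))*60)
| eval last_updated = strftime(now() - local_sec - 5*3600, "%Y-%m-%d %H:%M:%S EST")
| table last_updated

Because the shift cancels the viewer's own UTC offset before applying the target one, every user sees the same wall-clock time regardless of their local timezone.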
Hello, When I run the splunk apply shcluster-bundle command it seems to create a bundle for all apps, copying /$SPLUNK_HOME/etc/shcluster/apps to /$SPLUNK_HOME/var/run/splunk/deploy/apps. The last-modified date on all app directories in /$SPLUNK_HOME/var/run/splunk/deploy/apps is always the date when I last ran the apply shcluster-bundle command. We have merge_to_default set; could that have something to do with all apps being pushed to the search head cluster regardless of whether there has been any change in the apps? How can I troubleshoot what the problem could be? When the apply command does finish it can take hours, but usually it times out.
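For troubleshooting, one starting point (a sketch; replace <deployer_host> with your deployer) is to look at the deployer's own internal logs for warnings and errors around the push:

index=_internal host=<deployer_host> source=*splunkd.log* (bundle OR deploy) (log_level=WARN OR log_level=ERROR)

This at least shows whether the deployer is rebuilding and re-pushing every app, or failing part-way and retrying, which would explain multi-hour pushes.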
I am calling a stored procedure in MS SQL using dbxquery. The stored procedure is invoked with:

EXECUTE @RC = [dbo].[getSomeData]
   @time
  ,@interval
  ,@retVal OUTPUT
  ,@retErrorMessage OUTPUT
GO

Am I able to access the values returned in the OUTPUT variables, i.e. retVal and retErrorMessage, from the stored procedure? All help and insight is appreciated. Thanks
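dbxquery returns whatever result set the SQL batch produces, so one common pattern (a sketch: the connection name, variable types, and parameter values are placeholders, and exact quoting can vary by DB Connect version) is to declare the OUTPUT variables inside the batch and SELECT them at the end:

| dbxquery connection="my_mssql_connection" query="DECLARE @RC int, @retVal int, @retErrorMessage varchar(255); EXECUTE @RC = [dbo].[getSomeData] @time = '2022-05-30', @interval = 60, @retVal = @retVal OUTPUT, @retErrorMessage = @retErrorMessage OUTPUT; SELECT @RC AS RC, @retVal AS retVal, @retErrorMessage AS retErrorMessage;"

The final SELECT turns the OUTPUT values into an ordinary one-row result that arrives in Splunk as fields RC, retVal, and retErrorMessage.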
Hello, I have been asked to study the possibility of integrating Splunk authentication/authorization with CyberArk PAM/PSM. To connect to Splunk, users should go through PAM/PSM. I could not find anything in the documentation or on the internet. Can you please tell me whether this is, or will be, feasible?
Hi, We can configure a heavy forwarder to send syslog data from Splunk to a third party. How do we configure this flow to use TLS with mutual authentication (client and server certificates)? Thanks, Gabriel
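As far as I can tell the [syslog] output type in outputs.conf has no TLS settings, so a common workaround is a raw tcpout group with mutual TLS. A minimal sketch, where the server name and certificate paths are assumptions:

# outputs.conf on the heavy forwarder
[tcpout:thirdparty_tls]
server = syslog.example.com:6514
sendCookedData = false                                   # send raw events instead of Splunk's cooked protocol
useSSL = true
clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem    # presented to the server for mutual auth
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem     # CA used to validate the server certificate
sslVerifyServerCert = true

You would then route the relevant data to this group, e.g. with _TCP_ROUTING on the input or via props/transforms.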
We are trying to integrate the Microsoft SCCM v2.1.3 app with Splunk to get patching information. I need an SOP with the steps to follow to integrate Microsoft SCCM with Splunk. Please help me with this.
Hi all, I am using the "Cisco Cloud Security Umbrella Addon for Splunk" to ingest the data via API. https://splunkbase.splunk.com/app/5557/ Unfortunately the add-on does not include any CIM knowledge. Can anyone tell me if there is a supported or working add-on for the CIM mapping? Thank you, O.
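In case nothing packaged turns up, a DIY starting point (a sketch; the sourcetype and eventtype names are assumptions, so check what the Umbrella add-on actually assigns) is an eventtype plus the CIM Network Resolution tags:

# eventtypes.conf
[cisco_umbrella_dns]
search = sourcetype=cisco:umbrella:dns

# tags.conf
[eventtype=cisco_umbrella_dns]
network = enabled
resolution = enabled
dns = enabled

Field aliases or EVAL statements in props.conf would still be needed to map the API field names onto CIM fields such as query, src, and reply_code.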
Hi, is there a way to make a Splunk transaction wait until it has ended before starting another transaction?

e.g. if I have (with latest results at the top):

a end
b start
c start
d end
e end
f start
g start
h start

What I get from Splunk here would be the transactions f->e, g->d and b->a. But what I want is h->e and c->a, so once it has found a "start" it then looks for an "end", and then looks for the next "start" after that, etc.
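One way to get that pairing (a sketch, assuming every event is either a "start" or an "end" and that searchmatch can distinguish them): keep only the events where the marker flips, which collapses each run of consecutive starts or ends to its first element, then let transaction pair what's left:

... your search ...
| sort 0 _time
| eval marker=if(searchmatch("start"), "start", "end")
| streamstats current=f window=1 last(marker) as prev
| where marker!=coalesce(prev, "none")
| sort 0 -_time
| transaction startswith=eval(marker=="start") endswith=eval(marker=="end")

On the example above this keeps h, e, c, and a, yielding the transactions h->e and c->a. Note that the intermediate events are discarded rather than included in the transactions.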
Hi, I have a string like the one below; how can I extract all the key-value pairs between the braces (the keys vary)?

Arg[2]: NetworkPacket{trace='0'errCode=''dateTimeLocalTransaction='Mon May 30 00:00:00 IRDT 2022'dateTimeLocalTransactionTo='Mon May 30 23:59:59 USDT 2022'selectedTerminalTypes='[]'UDPApproveTermID='', dateEnd=null', referenceID='', selectedFlowTypeMaps=[]}

For the above string the output should look like this:

trace=0
errCode=
dateTimeLocalTransaction=Mon May 30 00:00:00 USDT 2022
dateTimeLocalTransactionTo=Mon May 30 23:59:59 USDT 2022
selectedTerminalTypes=
UDPApproveTermID=
dateEnd=null
referenceID=
selectedFlowTypeMaps=

Thanks,
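A rex sketch that covers the three value shapes visible in the sample (quoted, bracketed list, and bare null; any other shape would need another alternative in the regex):

| rex max_match=0 "(?<kv>\w+=(?:'[^']*'|\[[^\]]*\]|null))"
| eval kv=mvmap(kv, replace(kv, "'", ""))

This produces a multivalue field kv with one key=value entry per pair, quotes stripped, e.g. trace=0, errCode=, dateEnd=null.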
I am planning to upgrade Splunk Enterprise from v7.x to v8.2. Do I need to go through v7.x -> v8.0 -> v8.2, or is it possible to go v7.x -> v8.1 -> v8.2 instead of through v8.0? If anything goes wrong and I need to revert, is it necessary to go v8.2 -> v8.0 -> v7.x, or is it possible to roll back directly from v8.2 to v7.x without going through v8.0? Pointers to Splunk documentation will be highly appreciated.
Hello Splunkers!! Below is the search where we compare the last 3 hours vs the same window 1 week ago. How can we use a dynamic token here, so that when they select 2 hours it compares 2 hours vs the same 2 hours 1 week ago? How can we use a token in place of -3h?

index=ecomm_sfcc_prod sourcetype=sfcc_logs source="/mnt/webdav/*.log" "Order created successfully" ((earliest=@m-3h latest=@m) OR (earliest=@m-1w-3h latest=@m-1w))
| eval time=date_hour.":".date_minute
| eval date=date_month.":".date_mday
| chart count by time date
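One way (a sketch; the token name span and the dropdown choices are assumptions) is a Simple XML input whose value is the relative-time suffix, concatenated into the time terms:

<input type="dropdown" token="span">
  <label>Comparison window</label>
  <choice value="-2h">Last 2 hours</choice>
  <choice value="-3h">Last 3 hours</choice>
  <default>-3h</default>
</input>

and in the search, replace @m-3h with @m$span$ and @m-1w-3h with @m-1w$span$:

((earliest=@m$span$ latest=@m) OR (earliest=@m-1w$span$ latest=@m-1w))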
Hello, Can someone please guide me on how to extract a multivalue field called "GroupName" from my JSON data via the field extractor (IFX)? The different values are separated by ",\"" as you can see in the raw events. By default it only extracts the first value.

Raw events:

{"LogTimestamp": "Mon May 30 06:27:07 2022",[],"SAMLAttributes": "{\"FirstName\":[\"John\"],\"LastName\":[\"Doe\"],\"Email\":[\"John.doe@mycompany.com\"],\"DepartmentName\":[\"Group1-AVALON\"],\"GroupName\":[\"ZPA_Vendor_Azure_All\",\"Zscaler Proxy Users\",\"NewRelic_FullUser\",\"jira-users\",\"AWS-SSO-lstech-viewonly-users\",\"All Workers\"],\"userAccount\":[\"Full Time\"]

The regex generated by the IFX causes GroupName to have only one value: "ZPA_Vendor_Azure_All". I want it to also display the other values: Zscaler Proxy Users, NewRelic_FullUser, jira-users, AWS-SSO-lstech-viewonly-users, All Workers. The end of the GroupName values is just before the "userAccount" field. Hope I am clear.
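If the IFX regex keeps stopping at the first value, a rex-based sketch may work instead (the escaping is fiddly because the JSON inside SAMLAttributes is itself escaped, so test this against your exact raw events):

| rex "GroupName\\\\\":\[(?<groups>[^\]]*)\]"
| eval GroupName=split(replace(groups, "\\\\\"", ""), ",")

The rex captures everything between the brackets after \"GroupName\":, the replace strips the escaped quotes, and split turns the comma-separated remainder into a multivalue field.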
Hi, I have a table like the one below; how can I show it on a map?

spl | table city count

city      count
الریاض    10
جدة       20
مکة       33

Thanks
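Splunk's bundled geo lookups cover countries and US states rather than arbitrary cities, so one sketch (the city_coords lookup is hypothetical; it would be a CSV you build with city, lat, lon columns) is to join coordinates onto each city and feed geostats, then use the Cluster Map visualization:

... your search ...
| stats sum(count) as count by city
| lookup city_coords city OUTPUT lat lon
| geostats latfield=lat longfield=lon sum(count) by city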
Hello, I was trying to find out the correlation among indexed fields, index-time field extraction, HF/UF, the deployment server, and performance. Do we need index-time field extraction to create indexed fields? When we have index-time field extraction, do we have to have an HF installed there, and does it have to be on the deployment server? What would be the computational overhead of index-time field extraction compared to search-time field extraction, given that Splunk highly recommends avoiding index-time field extractions? Thank you so much for your thoughts and support in finding this correlation.
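For reference, index-time extraction is configured in props/transforms/fields.conf and has to live on the first full Splunk instance that parses the data: a heavy forwarder if one is in the path, otherwise the indexers (a universal forwarder doesn't parse, apart from structured-data inputs), and the deployment server is only a distribution mechanism, not a required location. A minimal sketch with made-up names:

# props.conf
[my:sourcetype]
TRANSFORMS-region = add_region_indexed

# transforms.conf
[add_region_indexed]
REGEX = region=(\w+)
FORMAT = region::$1
WRITE_META = true        # write the extracted field into the index

# fields.conf
[region]
INDEXED = true           # tell search heads this field is indexed

Each indexed field grows the tsidx files for every event of that sourcetype, which is the overhead behind the recommendation to prefer search-time extraction.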
Hi everyone, I want to prevent warm buckets from becoming cold, not to disable the cold tier entirely, since it's mandatory to have a coldPath. The reason is that my hot/warm and cold buckets are all on the same fast storage, and since I also need to define maxVolumeDataSizeMB for coldPath, I want to use as much of my storage as possible for homePath. Here's an example of what I mean:

Total disk space: 100GB
homePath's maxVolumeDataSizeMB: 90GB
coldPath's maxVolumeDataSizeMB: 10GB
No frozenPath

I want to configure the indexes to avoid moving buckets to cold as much as possible, so that I can reduce the coldPath allocation to only 1GB, freeing up 9GB of space to allocate to homePath on each of my 30 indexers. From my Monitoring Console I see that coldPath is not used much, so 9GB across all indexers adds up to a lot of under-utilized space. Based on these stats I could set it to 1GB today, but it might suddenly increase one day, which leads to my question above: I want to set it in a deterministic way. Any advice is appreciated.
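Warm buckets roll to cold when the warm-bucket count exceeds maxWarmDBCount or when homePath (or its volume) hits its size cap, so one sketch (index, volume names, and paths are placeholders) is to raise the warm count high enough that only the volume limit ever triggers the roll:

# indexes.conf
[volume:hot]
path = /san/splunk/hot
maxVolumeDataSizeMB = 92160     # ~90GB for hot/warm

[volume:cold]
path = /san/splunk/cold
maxVolumeDataSizeMB = 1024      # minimal cold allocation

[my_index]
homePath = volume:hot/my_index/db
coldPath = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb   # thawedPath cannot use a volume
maxWarmDBCount = 10000                      # default is 300; keep buckets warm far longer

The caveat is that once the hot volume fills, Splunk still rolls the oldest warm buckets to cold, so a 1GB cold volume can fill quickly and start freezing (with no frozenPath, deleting) data; that is the non-deterministic part to plan around.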
  index="np-dockerlogs*" source="*gps-request-processor-dev*" sourcetype= "*eu-central-1*" event="*Request" | fields event category labelType documentType regenerate businessKey businessValue... See more...
  index="np-dockerlogs*" source="*gps-request-processor-dev*" sourcetype= "*eu-central-1*" event="*Request" | fields event category labelType documentType regenerate businessKey businessValue sourceNodeType sourceNodeCode geoCode jobId status sourcetype source traceID processingTime _time | eval LabelType=coalesce(labelType, documentType) | sort _time | table event LabelType sourceNodeCode geoCode status traceID processingTime Above query provide three record for each traceid which indicate for the respective traceid request was received request was success/failed total time taken by the request now from this data i want to produce below type of table   geoCode   sourceNodeCode   LabelType        event         totalreqreceived     successrate      avgProcessingTime EMEA           1067                           Blindilpn     synclabelrequest           1                              100%                     450                                                             taskstart     synclabelrequest           5                                98%                    1500                        1069                          ilpn                synclabelrequest           1                              100%                     420   NA                1068                          NIKE            synclabelrequest             1                              100%                     500                                                            cgrade        synclabelrequest            4                                95%                      2000                                                            NIKE            asynclabelrequest          1                               100%                     350 This table shows the 'total no of request received' , 'there success percentage' and 'average processingtime' for each 'event (either synclabelrequest or asynclabelrequest)'  from a list of 'labelType' belongs to a specific sourceNodeCode and geocode
Hello, the free Splunk Fundamentals training used to be at the link below; however, I am now getting a 404 error when trying to access it: https://www.splunk.com/en_us/training/free-courses/splunk-fundamentals-1.html However, if you navigate to the free courses list here, it points to the same page: https://www.splunk.com/en_us/training/free-courses/overview.html?301=/en_us/training/free-courses/&locale=en_us Is it just a broken link, or is this training no longer available for free? Thanks.
Here is my situation. I can use a subsearch to get two columns of data, like below. The rows are not aligned, so I can't simply use eval if to compare them. Some of the values are identical, but some are not. I want to output the values that exist in column1 but not in column2.

column1  column2
AA       BB
CC       AA
DD       FF
EE       ZZ
FF       XX
VV       MM
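One pattern for this kind of set difference (a sketch; <search1> and <search2> stand for whatever currently produces each column): normalize both columns into a single field, tag each value's origin, and keep the values whose only origin is column1:

<search1> | stats count by column1 | rename column1 as value | eval src="col1"
| append [ <search2> | stats count by column2 | rename column2 as value | eval src="col2" ]
| stats values(src) as src by value
| where mvcount(src)=1 AND src="col1"

On the sample data this returns CC, DD, EE, and VV (AA and FF appear in both columns and drop out).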
The default ranges for the Overall Service Health Score are: Critical 0-20, High 20-40, Medium 40-60, Low 60-80, Normal 80-100, and in the Service Analyzer they are colored: Critical red, High amber, Medium orange, Low yellow, Normal green. Basically, I want to change the default ranges so that when the score is 89 it is Low severity instead of Normal, and the service node shows yellow instead of green. Is it possible to change the default ranges?
Hi guys. Question: what's the best maxKBps setting in such an environment?

1Gbit LAN
About 2000 forwarders
6 indexers

I know the one correct answer doesn't exist, since it may vary from server to server and from environment to environment, but surely there's a best practice for setting this fundamental value, right? For months now I have been fine with a value of 0 (no bandwidth limit), but sometimes the indexers come under real stress when people load many, many GB of logs (more than 1TB, for analysis of historical data), since the indexers receive so much data that their resources saturate; so I have had to force maxKBps to 10240 for some servers only, to stay healthy. Now, is 10240 the right compromise for ALL forwarders, perhaps raising the value later?
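For context, maxKBps lives in limits.conf under the [thruput] stanza on each forwarder (the universal forwarder defaults to 256, and 0 means unlimited). A sketch of one common arrangement, where the app names are placeholders and the values are judgment calls rather than official guidance:

# app deployed to all forwarders
[thruput]
maxKBps = 0          # no limit for normal day-to-day volumes

# app deployed via a separate serverclass to the bulk-loading servers only
[thruput]
maxKBps = 10240      # ~10 MB/s cap to protect the indexers during big backloads

Targeting the cap at the known bulk loaders keeps normal forwarders unthrottled while still protecting the indexers from backload spikes.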