All Topics



Hi Team, per the screenshot below: in the prod environment, the Splunk app is displayed whenever I select it, but in the dev environment the app is not displayed when I select it. I have verified the permissions, and it is set to visible. Please help me find out what the exact issue is.
Hello, I want to compare event counts across indexes to evaluate whether there are unexpected changes in logging. To react in time, I want those counts summed from the start of each day until now() for each day of the last seven days, so I can directly see what amount to expect for each day up until e.g. 2pm. The tricky part for me is how to sum the event counts for the days before today (up to the current time of day, but for e.g. yesterday). I managed to get results showing the hourly counts up to now() for each day, but I have no clue how to sum them per day. Search:

|tstats prestats=t count WHERE index=<example> by _time span=1h
| eval now=tonumber(strftime(now(),"%H"))
| eval hour=strftime(_time, "%H")
| where hour<=now
| timechart span=1h count
| table _time count

Result: the result makes me pretty happy already, but the step to get a sum out of those counts per day somehow eludes me. Many thanks for a hint in the right direction.
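One sketch of the missing step, building on the search from the question: instead of timechart, bucket each event into its calendar day and let stats do the per-day sum (the hour comparison is done numerically here; `<example>` stays a placeholder for the real index):

```spl
| tstats count WHERE index=<example> BY _time span=1h
| eval now_hour = tonumber(strftime(now(), "%H"))
| eval hour = tonumber(strftime(_time, "%H"))
| where hour <= now_hour
| eval day = strftime(_time, "%Y-%m-%d")
| stats sum(count) AS count_until_now BY day
```

This yields one row per day, each containing the event total from midnight up to the current hour of day, which makes the seven-day comparison direct.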
Hello, we have a lookup/KV store containing over 3M records*. We need to count the number of times each value is found across all of the records. Example: count the occurrences of the same LAST_NAME.

Field name: LAST_NAME
Values: JONES, SMITH, DAVIS, GARCIA
Counter values: 12, 34, 16, 23

This is just one of several different counters: BIRTH_YEAR, CITY, STATE, etc. Because of the limits within Splunk, this code results in blanks and inaccurate counts:

| eventstats count(ID) as count_same_city by CITY

Any suggestions?

* The number of records increases by 10K every week.

Thanks in advance, and God bless,
Genesius
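One approach that sidesteps the eventstats limits is to aggregate over the lookup with plain stats, which streams and aggregates rather than holding every record to annotate it (a sketch; `your_kvstore_lookup` is a placeholder for the actual lookup definition name):

```spl
| inputlookup your_kvstore_lookup
| stats count AS count_same_last_name BY LAST_NAME
```

The same pattern works per counter (BY CITY, BY BIRTH_YEAR, ...). If the counts then need to be attached back to each record, the stats output can be written with outputlookup and joined back via the lookup command, rather than computed inline with eventstats.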
Hi, I am referring to the table below as an example. I want to add CSS for two values in the Office column of the table.

Name | Position | Office | Age
Airi Satou | Accountant | Tokyo | 33
Angelica Ramos | Chief Executive Officer | London | 47
Ashton Cox | Junior Technical Author | San Francisco | 66
Bradley Greer | Software Engineer | London | 41
Brenden Wagner | Software Engineer | San Francisco | 28
Brielle Williamson | Integration Specialist | New York | 61
Bruno Nash | Software Engineer | London | 38
Caesar Vance | Pre-Sales Support | New York | 21
Cara Stevens | Sales Assistant | New York | 46
Cedric Kelly | Senior Javascript Developer | Edinburgh | 22

I want to highlight both values, Tokyo and San Francisco. The script I added works, but the problem is that it works for one value only: it will highlight San Francisco only. The script I added is as follows:

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function (_, $, mvc, TableView) {
    function cssLoad(tableName, field_name, field_val) {
        var CustomLinkRender = TableView.BaseCellRenderer.extend({
            canRender: function (cell) {
                return _([field_name]).contains(cell.field);
            },
            render: function ($td, cell) {
                var cell_value = cell.value;
                if (cell.field == field_name) {
                    if (cell_value == field_val.trim()) {
                        $td.css('color', '#1717E6');
                        $td.css('font-weight', 'bold');
                        $td.css('text-decoration', 'underline');
                        $td.css('text-decoration-color', 'blue');
                    }
                }
                $td.text(cell_value).addClass('string');
            }
        });
        var selectedTable = mvc.Components.get(tableName);
        if (typeof (selectedTable) != "undefined") {
            selectedTable.getVisualization(function (tableView) {
                tableView.addCellRenderer(new CustomLinkRender());
                tableView.render();
            });
        }
    }
    // Single-table hardcoded calls
    cssLoad('table1', 'Position', 'Software Engineer');
    cssLoad('table1', 'Age', '66');
    cssLoad('table1', 'Office', 'Tokyo');
    cssLoad('table1', 'Office', 'New York');
});

Please help me highlight multiple values.
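A likely cause is that each cssLoad call registers a separate renderer whose render test compares against a single value, and only one renderer ends up handling a given field. One sketch of a fix (not tested against your dashboard) is to pass an array of values and replace the equality test with a membership test, so a single renderer per field covers all values:

```javascript
// Sketch: call as cssLoad('table1', 'Office', ['Tokyo', 'San Francisco'])
// and, inside render, replace
//     if (cell_value == field_val.trim()) { ... }
// with
//     if (shouldHighlight(cell_value, field_val)) { ... }
function shouldHighlight(cellValue, fieldVals) {
    // Accept either a single string or an array of strings.
    var vals = Array.isArray(fieldVals) ? fieldVals : [fieldVals];
    return vals.some(function (v) {
        return String(cellValue).trim() === String(v).trim();
    });
}
```

With this change, one cssLoad call per field is enough, which also avoids registering several renderers that all claim the same column.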
Good day, we have an issue where, when we try to set up email notifications with our email server in Splunk, no emails are sent to the respective email addresses. We tried adding the information "ipaddress:port" to the mail host section, but no luck. We do not require a username and password. We also tried following the instructions from the documentation here: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Sendemail There was no luck there either. Is there an alternative way to set up email notifications, through SSH or something else? This has to satisfy a STIG requirement. Please let me know if I need to provide additional information to the forum. Thank you
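One way to isolate the problem outside of an alert is a throwaway search using the sendemail command with an explicit server, so the SMTP hop can be tested directly (a sketch; the recipient and host:port are placeholders for your values):

```spl
| makeresults
| sendemail to="you@example.com" server="ipaddress:port" subject="Splunk SMTP test"
```

Errors from the mail subsystem typically land in the _internal index, so a follow-up search for index=_internal sendemail usually shows the concrete SMTP failure (connection refused, relay denied, TLS mismatch, etc.), which narrows down whether the issue is the Splunk config or the mail server side.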
My installer is in the /opt path, along with other Splunk installers:

[root@siem-security opt]# ls
splunk
splunk-8.0.0-1357bef0a7f6-linux-2.6-x86_64.rpm
splunk-enterprise-security_530.spl
splunk-7.2.6-c0bf0f679ce9-linux-2.6-x86_64.rpm
splunk-8.2.7-2e1fca123028-linux-2.6-x86_64.rpm
splunkforwarder-7.2.6-c0bf0f679ce9-linux-2.6-x86_64.rpm

I already have Splunk installed with version 8.0.0, and when I run rpm -ivh splunk-8.2.7-2e1fca123028-linux-2.6-x86_64.rpm I get the following error for many package files (this is only part of the error):

splunk-8.2.7-2e1fca123028.x86_64 installation file /opt/splunk/share/splunk/search_mrsparkle/templates/layout/admin_lite.html conflicts with package file splunk-8.0.0-1357bef0a7f6.x86_64
splunk-8.2.7-2e1fca123028.x86_64 installation file /opt/splunk/share/splunk/search_mrsparkle/templates/layout/view.html conflicts with package file splunk-8.0.0-1357bef0a7f6.x86_64
splunk-8.2.7-2e1fca123028.x86_64 installation file /opt/splunk/share/splunk/search_mrsparkle/templates/layout/wizard.html conflicts with package file splunk-8.0.0-1357bef0a7f6.x86_64
splunk-8.2.7-2e1fca123028.x86_64 installation file /opt/splunk/share/splunk/search_mrsparkle/templates/lib.html conflicts with package file splunk-8.0.0-1357bef0a7f6.x86_64
splunk-8.2.7-2e1fca123028.x86_64 installation file /opt/splunk/share/splunk/search_mrsparkle/templates/licensing/overview.html conflicts with package file splunk-8.0.0-1357bef0a7f6.x86_64
splunk-8.2.7-2e1fca123028.x86_64 installation file /opt/splunk/share/splunk/search_mrsparkle/templates/licensing/usage.html conflicts with package file splunk-8.0.0-1357bef0a7f6.x86_64
splunk-8.2.7-2e1fca123028.x86_64 installation file /opt/splunk/share/splunk/search_mrsparkle/templates/pages/static.html conflicts with package file splunk-8.0.0-1357bef0a7f6.x86_64
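The conflicts suggest rpm is treating the 8.2.7 package as a second, parallel install. For upgrading an existing rpm-installed Splunk, the upgrade flag (-U) rather than the install flag (-i) is normally what's wanted; a sketch (run as root, with Splunk stopped first; always back up before upgrading):

```shell
# -i installs a new package alongside 8.0.0 and collides with its files;
# -U upgrades the installed splunk package in place.
/opt/splunk/bin/splunk stop
rpm -Uvh splunk-8.2.7-2e1fca123028-linux-2.6-x86_64.rpm
/opt/splunk/bin/splunk start
```

On first start after the upgrade, Splunk will run its migration step; answering its prompts (or starting with --accept-license --answer-yes) completes the version change.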
index=idx_rdap source="*f5*" "*member*" "RO1B4-0JLSM4000S" "/Common/pool_d2i_*gkrgkl"
| rex field=member "\/Common\/(?<server>[^:]*)"
| stats latest(status) as Last_status, values(pool) as pool_name, values(_time) as _time by pool, server
| stats values(Last_status) as Last_status, values(server) as server by pool
| eval severity=case(server="XC001X02" AND server="XC001X03" AND Last_status="down", "1", server="XC001X03" AND Last_status="down", "3", server="XC001X02" AND Last_status="down", "3", true(), "0")
| table pool, Last_status, severity, server
| eval hour_of_the_starttime=strftime(_time, "%H")
| eval support_group=if(hour_of_the_starttime>=19 OR hour_of_the_starttime<7, "WW-XX-TFORMEGI-L3", "WW-XX-MSEGI-L2")
| eval ressource="GKR-GkL-I:" + pool
| eval service_offring="GKR-GkL-I"
| eval description="VIP GMS got a service status down \n \nDetail : One or more legs Impacted service on :" + pool + "\n On server(s): " + tostring(server) + " \n\n\n\n; " + support_group + " ;KB=KB00000"
| table ressource description pool service_offring severity server support_group
Hi Community, for some reason the extension no longer works since I upgraded to TLSv1.3. Does anyone know if they're compatible, or whether some config adjustments are necessary to collect metrics? Thanks
Hi everyone, I have the following issue: within a search, a data field contains values like this:

db2_stat = "1,3:8"
db2_stat = "2,5:7"

This should be translated into a comma-separated list of all values, starting with the value on the left side of the colon and ending with the value on the right side. In other words, the resulting data field should look like this:

db2_stat_xlated = "1,3,4,5,6,7,8"
db2_stat_xlated = "2,5,6,7"

I thought I'd write a macro that calls itself recursively until the start value reaches the end value. But whatever I tried, I ended up with the message "Error in 'SearchParser': Reached maximum recursion depth (100) while expanding macros. Check for infinitely recursive macro definitions." Last version of the macro code:

| eval st_v = $start_v$, ed_v = $end_v$, value_list = $val$
| eval nx_v = st_v + 1
| eval value_list = case(st_v < ed_v, value_list . st_v . "," . `GEN_VALUE_LIST(nx_v, ed_v, value_list)`, st_v == ed_v, value_list . st_v, 1==1, value_list)

The macro is defined as GEN_VALUE_LIST(3) with these arguments: start_v, end_v, val

Query to test:

| makeresults
`GEN_VALUE_LIST(3,6,"1,")`
| table *

Although I'm keen to understand Splunk's issue with it (the same code transferred to Perl works), I'd mostly appreciate a working solution beyond defining all possible list values in a lookup file. Many thanks in advance, Ekke
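The recursion fails because macros are expanded at parse time, before any eval runs: a macro that references itself expands forever regardless of the case() logic, so the depth limit is always hit. The expansion can be avoided entirely with mvrange, which generates the number range as a multivalue field; a sketch using the sample values from the question:

```spl
| makeresults
| eval db2_stat = "1,3:8"
| rex field=db2_stat "^(?<prefix>.*?)(?<start_v>\d+):(?<end_v>\d+)$"
| eval db2_stat_xlated = prefix . mvjoin(mvrange(tonumber(start_v), tonumber(end_v) + 1), ",")
```

mvrange's end bound is exclusive, hence the + 1; for "1,3:8" this produces "1,3,4,5,6,7,8" and for "2,5:7" it produces "2,5,6,7", with no recursion involved.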
I am using Splunk Cloud and need to know about the Splunk status page. There are multiple services listed when opening the link https://status.scp.splunk.com/ Can anyone give me a description of those services, and the impact if one is not working?
Hello, I have encountered a bug when exporting a panel from my dashboard. On my end, it just opens a new tab with nothing in it, but on my customer's side it shows a different result. How can I go about fixing this bug?
Hi, I am trying to implement a dynamic input dropdown using a query in Dashboard Studio. The code I am using is as follows:

"input_H05frgOO": {
    "options": {
        "items": [],
        "token": "host_token",
        "defaultValue": ""
    },
    "title": "HOST",
    "type": "input.dropdown",
    "dataSources": {
        "primary": "ds_ljNWYr7J"
    }
}

And below is the data source:

"ds_ljNWYr7J": {
    "type": "ds.search",
    "options": {
        "query": "| mstats avg(\"mx.process.cpu.utilization\") as X WHERE index=\"murex_metrics\" span=10s BY \"mx.env\" | dedup mx.env | table mx.env"
    },
    "name": "search_6"
}

The input dropdown says "waiting for input". Could you please help me with the issue?

Regards, Pravin
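If I remember the Dashboard Studio pattern correctly (please verify against the Dashboard Studio docs for your Splunk version, as the DSL has changed between releases), dynamic options are pulled from the data source via an items expression rather than an empty items array, and the search has to supply label and value fields. A sketch of both fragments:

```json
"input_H05frgOO": {
    "options": {
        "items": ">frame(label, value) | prepareOptions",
        "token": "host_token",
        "defaultValue": "*"
    },
    "title": "HOST",
    "type": "input.dropdown",
    "dataSources": {
        "primary": "ds_ljNWYr7J"
    }
}
```

with the data source query extended so its results carry those field names, e.g. appending | eval label = 'mx.env', value = 'mx.env' after | table mx.env (single quotes are needed in eval because of the dot in the field name). With a literal "items": [] and no static entries, the dropdown has nothing to offer, which matches the "waiting for input" symptom.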
Hi, I'm configuring SSL in a test environment on version 8.2.6 of Splunk Enterprise before upgrading to Splunk 9.0.0. I have managed to encrypt traffic between my Splunk servers; however, I am now unable to forward data to my indexers, as they are refusing connections from my forwarders. Do I have to have a certificate on all of my forwarders to make use of SSL/TLS? I'm trying to avoid the overhead of having to manage certificates on all of the servers that I have in the production environment. Thanks. Mike.
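In general, a forwarder only needs a certificate of its own when the indexer demands one with requireClientCert = true; otherwise the forwarder just needs to trust the CA that signed the indexer's certificate. A sketch of the relevant stanzas (paths, hostnames, and stanza names are placeholders; exact setting names vary slightly by version, so check the inputs.conf/outputs.conf specs for 8.2):

```
# On the indexer: inputs.conf
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer.pem
sslPassword = <password>
requireClientCert = false    # forwarders then need no certificate of their own

# On the forwarder: outputs.conf
[tcpout:ssl_group]
server = indexer.example.com:9997
sslVerifyServerCert = true   # forwarder must trust the CA that signed indexer.pem
```

The CA trust on the forwarder side typically comes from sslRootCAPath under [sslConfig] in server.conf. A refused connection right after enabling SSL often means one side is speaking TLS on a port where the other expects cleartext (splunktcp vs. splunktcp-ssl), so splunkd.log on the indexer is worth checking for the exact handshake error.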
Hi Team, how do I get the OCI logging option under Data Inputs?
I scheduled a search to run at 0 2,8,14,20 * * *. The timezone of the search head is UTC, so I expect the next run time to be 2am UTC, yet Splunk says the next run time will be 6am UTC. How could this be, and where is this configured? I suspect there is a setting somewhere causing the cron expressions to be interpreted in US Eastern Time; since we are observing Daylight Saving Time, Eastern Daylight Time is UTC-4. The documentation (Use cron expressions for alert scheduling - Splunk Documentation) says "The Splunk cron analyzer defaults to the timezone where the search head is configured. This can be verified or changed by going to Settings > Searches, reports, and alerts > Scheduled time." I find no "Scheduled time" under Settings > Searches, reports, and alerts. I did post this to the feedback on that documentation page in case it is actually inaccurate. Where can I check and verify? Thanks!
Hi Splunkers, this may be easy, but I'm not able to solve it; any help is appreciated. I want to set a lower threshold 15 standard deviations below the mean, and an upper threshold 15 standard deviations above the mean, but I'm not sure how to implement that. Thanks! This is what I have:

index=X sourcetype=Y source=metrics.kv_log appln_name IN ("FEED_FILE_ROUTE", "FEED_INGEST_ROUTE") this_hour="*"
| bin span=1h _time
| stats latest(this_hour) AS Volume BY appln_name, _time
| eval day_of_week=strftime(_time,"%A"), hour=strftime(_time,"%H")
| lookup mt_expected_processed_volume.csv name as appln_name, day_of_week, hour outputnew avg_volume, stdev_volume
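Continuing from the lookup above (field names taken from the question's search), the thresholds are just arithmetic on the looked-up mean and standard deviation:

```spl
| eval lower_threshold = avg_volume - 15 * stdev_volume
| eval upper_threshold = avg_volume + 15 * stdev_volume
| where Volume < lower_threshold OR Volume > upper_threshold
```

The where clause then keeps only the rows whose Volume falls outside the 15-sigma band, which is typically the condition an alert would trigger on; drop it if the thresholds should instead be plotted alongside Volume.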
Received the below error after answering "y" to "Perform migration and upgrade without previewing configuration changes? [y/n]".

-- Migration information is being logged to '/opt/splunk/var/log/splunk/migration.log.2022-07-06.23-37-40' --
Migrating to:
VERSION=9.0.0
BUILD=6818ac46f2ec
PRODUCT=splunk
PLATFORM=Linux-x86_64
Copying '/opt/splunk/etc/myinstall/splunkd.xml' to '/opt/splunk/etc/myinstall/splunkd.xml-migrate.bak'.
An unforeseen error occurred:
Exception: <class 'PermissionError'>, Value: [Errno 13] Permission denied: '/opt/splunk/etc/myinstall/splunkd.xml-migrate.bak'
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1359, in <module>
    sys.exit(main(sys.argv))
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1212, in main
    parseAndRun(argsList)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1067, in parseAndRun
    retVal = cList.getCmd(command, subCmd).call(argList, fromCLI = True)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 293, in call
    return self.func(args, fromCLI)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/control_api.py", line 35, in wrapperFunc
    return func(dictCopy, fromCLI)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/_internal.py", line 189, in firstTimeRun
    migration.autoMigrate(args[ARG_LOGFILE], isDryRun)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/migration.py", line 3158, in autoMigrate
    comm.copyItem(PATH_SPLUNKD_XML, PATH_SPLUNKD_XML_BAK, dryRun)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli_common.py", line 1086, in copyItem
    shutil.copy(src, dst)
  File "/opt/splunk/lib/python3.7/shutil.py", line 248, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/opt/splunk/lib/python3.7/shutil.py", line 121, in copyfile
    with open(dst, 'wb') as fdst:
PermissionError: [Errno 13] Permission denied: '/opt/splunk/etc/myinstall/splunkd.xml-migrate.bak'
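The PermissionError in the traceback above means the migration could not write a backup file under /opt/splunk/etc, which usually indicates the upgrade is being run as a user that does not own the Splunk tree. The typical fix (as root) is chown -R splunk:splunk /opt/splunk and then rerunning the upgrade as the splunk user; the sketch below demonstrates the owner-vs-current-user check on a scratch file (substitute the real path, e.g. /opt/splunk/etc/myinstall/splunkd.xml, on the affected host):

```shell
# Compare the file's owner against the user running the command; on the
# affected host, point this at /opt/splunk/etc/myinstall/splunkd.xml.
f=$(mktemp)
owner=$(stat -c '%U' "$f")   # file owner
me=$(id -un)                 # current user
echo "owner=$owner me=$me"
rm -f "$f"
```

If owner and current user differ on the real splunkd.xml, that mismatch is the cause; fix ownership and rerun the migration rather than forcing it through as root, or Splunk may end up with mixed-ownership files.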
Can Splunk DB Connect use the SQL WITH statement?

WITH TABLE_BASE AS (
    -- this section is the base query and matches the Smart reporting logic
    SELECT DISTINCT

The WITH command is not highlighted in red as the other commands are.
I need help with loading CSV files into Splunk with the event time recorded as seconds past midnight instead of HH:MM:SS time. Below is a sample of the data I need to load. How do I specify that the time column is the number of seconds past midnight when defining the Timestamp for the Source Type? PickStartDate,BTVersion,TripNumber,Sequence,PassingTime,ArrivalTime,DepartureTime,FlagStop,ByPass,EarlyDeparture,event_line_number 2021-04-25,S1000216,1020,1,54900,54900.0,54900.0,0,0,,1 2021-04-25,S1000216,1020,2,54955,,,0,0,,2 2021-04-25,S1000216,1020,3,54999,,,0,0,,3
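Splunk's timestamp recognition has no direct "seconds past midnight" format, so one approach is to compute _time at ingest from the date column plus the seconds column. A sketch using INGEST_EVAL (assumptions: the sourcetype name is a placeholder, and INDEXED_EXTRACTIONS = csv is in place so the CSV fields exist at index time; verify INGEST_EVAL behavior against your Splunk version's transforms.conf docs):

```
# props.conf (on the forwarder/indexer handling the file)
[seconds_past_midnight_csv]
INDEXED_EXTRACTIONS = csv
TRANSFORMS-settime = set_time_from_seconds

# transforms.conf
[set_time_from_seconds]
INGEST_EVAL = _time = strptime(PickStartDate, "%Y-%m-%d") + tonumber(PassingTime)
```

strptime on the bare date yields midnight (local time) of PickStartDate, and adding PassingTime shifts it by the seconds past midnight. If index-time rewriting is not an option, the same arithmetic works at search time: | eval _time = strptime(PickStartDate, "%Y-%m-%d") + tonumber(PassingTime), though the indexed timestamp then stays at load time.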
So I need to move a deployer to a dedicated host. I have a 3-member SHC on version 8.1.3, all healthy. I have read a number of posts that give similar answers (to the answer I received for my original post), such as:

"copy over the /opt/splunk/etc/shcluster to the new deployer"
"configure the new deployer (to use the cluster's secret key and to set the SHC label), move the configuration bundle from the old deployer to the new deployer, and then point the cluster members to the new deployer"
"migrate the shcluster folder structure and any shclustering stanza configurations you have on the deployer to the new deployer"
"also break the SHC and rebuild with new deployer info"

While all these answers make sense, I don't know exactly what to reconfigure/change, so I read https://docs.splunk.com/Documentation/Splunk/8.1.3/DistSearch/BackuprestoreSHC and that takes you down the rabbit hole of backups and restores that don't entirely seem necessary. So I am wondering if anyone can verify the minimum changes that need to be made, or whether I should follow the above link's instructions. As I understand it, I just need to do the following:

1) Build the new deployer (new IP, new FQDN) and install Splunk.
2) Configure the [shclustering] stanza in /opt/splunk/etc/system/local/server.conf:

[shclustering]
pass4SymmKey = <secret>
shcluster_label = <name>

3) On each SHC member, edit the [shclustering] stanza:

[shclustering]
conf_deploy_fetch_url = https://<newIP>:8089

and make sure pass4SymmKey and shcluster_label are the same as on the new deployer.

4) Copy /opt/splunk/etc/shcluster to the new deployer.
5) Restart everything.

Does that seem right? I don't have the luxury of a dev environment to test. Do I need to put the SHC members in detention or stop Splunk on everything before I make the changes? Any advice is appreciated. Thank you