All Topics

I am running a RHEL 7 server and noticed that my Splunk forwarder client is not reporting in. I am running iptables. Here are the rules I've added:

-A INPUT -p tcp -m tcp --dport 8089 -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 8089 -j ACCEPT
Hi, I want to remove everything after certain characters, such as ? or &, when they appear in a field. For example:

/temp/test?csrkyyt=12334
/test1/test2&csrkyyt=7968676

Can someone help?
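Outside Splunk, the stripping logic can be sketched in Python; the pattern `[?&].*$` (an assumption about the field contents — adjust if the field can legitimately contain these characters earlier) removes everything from the first `?` or `&` onward, and the same pattern transfers to SPL's replace-style commands:

```python
import re

def strip_query(path: str) -> str:
    """Remove everything from the first '?' or '&' to the end of the string."""
    return re.sub(r"[?&].*$", "", path)

print(strip_query("/temp/test?csrkyyt=12334"))      # /temp/test
print(strip_query("/test1/test2&csrkyyt=7968676"))  # /test1/test2
```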
The percentage of non-high-priority searches skipped (50%) over the last 24 hours is very high and exceeded the red threshold (20%) on this Splunk instance. Total searches that were part of this percentage=2. Total skipped searches=1.
Since we upgraded our UF to v7.2.9, we are seeing lots of application crash errors in the Application event log on our hosts. This is happening on large numbers of hosts. Initially I thought it might be a specific counter, but it occurs whenever the splunk-perfmon.exe process is running, even if no perfmon collection is occurring. I don't see any errors in Splunk itself, and the splunk-perfmon process keeps running and sending data. Looking into these errors, there is some suggestion that this is related to Data Execution Prevention blocking Splunk from running code in data memory (the errors include code c0000005, an access violation), but I have not been able to confirm this. Servers previously running v6 did not show this error; it only started to appear after the upgrade. Example error below:

SourceName=Windows Error Reporting
EventCode=1001
EventType=4
Type=Information
ComputerName=xxxxxxxxxxxx
TaskCategory=The operation completed successfully.
OpCode=Info
RecordNumber=230239
Keywords=Classic
Message=Fault bucket , type 0
Event Name: APPCRASH
Response: Not available
Cab Id: 0
Problem signature:
P1: splunk-perfmon.exe
P2: 1794.2305.24028.63924
P3: 5ddcfc22
P4: splunk-perfmon.exe
P5: 1794.2305.24028.63924
P6: 5ddcfc22
P7: c0000005
P8: 00000000005bc5d8
P9:
P10:
Attached files:
These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_splunk-perfmon.e_2f9ed6fb118b57ac0e734f67ff573c73ad1654a_64da0b14_48835327
Hi All, I have some alerts configured with email as the only action. alert.track (Add to Triggered Alerts) is not enabled. I've tried to find evidence anywhere in the Splunk logs that these alerts have fired, but I'm unable to. I've tried looking in:

| rest splunk_server=local /servicesNS/-/-/saved/searches

as well as:

index=_audit action=alert_fired ss_app=myapp

However, neither appears to show evidence that the alert has fired (yes, I do receive the email). Is it possible I'm just missing it, or is it logged in another area? Thank you.
Hello, a Universal Forwarder (7.0.1) is watching a text file. The parameters are as follows:

[default]
host = RBD9EUFN

[monitor://C:\ProgramData\Cognex\In-Sight\Splunk\Log_Cam]
index = rbg_ff1_stand_allone_ant2
sourcetype = rbg_ff1_stand_allone_ant2_sourcetype
crcSalt = <SOURCE>
followTail = 1

The strange thing is, the sourcetype name changes by itself! Why?
Hello, I am trying to get a regex to work in Splunk, but without success; perhaps someone here can help me? This works when I test it outside Splunk:

", logdata="Process Code: -400 Process Message: [0]:ABC QRT 12764 NOT. <-PurchaseOrderLineId- HeadId:1415640 Division:10 Id:1-1>", marowver="7"

But how do I get an expression that works in Splunk? I want this string: ABC QRT 12764 NOT
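As a quick way to validate a candidate pattern outside Splunk, the capture can be sketched in Python; the named-group syntax `(?<msg>...)` carries over to Splunk's rex command almost unchanged (Python spells it `(?P<msg>...)`). The pattern below is an assumption based on the sample line: capture everything after `[0]:` up to the first period.

```python
import re

sample = ('", logdata="Process Code: -400 Process Message: [0]:ABC QRT 12764 NOT. '
          '<-PurchaseOrderLineId- HeadId:1415640 Division:10 Id:1-1>", marowver="7"')

# Capture the text between "[0]:" and the next period.
m = re.search(r"Process Message: \[0\]:(?P<msg>[^.]+)\.", sample)
print(m.group("msg"))  # ABC QRT 12764 NOT
```

In Splunk the equivalent would be something like `| rex field=_raw "Process Message: \[0\]:(?<msg>[^.]+)\."` — worth checking against real events, since escaping inside double quotes is a common source of "works elsewhere, fails in SPL" issues.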
Hello, We are using a horizontal filler gauge in one of our dashboard panels. The panel width is set to 100% using inline CSS; however, the gauge is only using ~25% of the panel. How do I "stretch" the gauge to use the entire panel width? Also, the panel shows a horizontal scroll bar. How do I remove this and constrain the height of the panel? Thanks and God bless, Genesius
Hello, Is it possible to renew the trial license by re-installing Splunk? The expired license name is "Splunk Enterprise Splunk Analytics for Hadoop Download Trial". How can we renew it?
We are not able to access Splunk Web and are receiving the error below in web_service.log. Please let me know if anyone has an idea.

2020-04-20 10:12:59,669 ERROR [5e9d75ab947f40c9ad4f10] root:769 - Unable to start splunkweb
2020-04-20 10:12:59,669 ERROR [5e9d75ab947f40c9ad4f10] root:770 - No module named 'UserDict'
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/root.py", line 132, in <module>
    from splunk.appserver.mrsparkle.controllers.top import TopController
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/top.py", line 27, in <module>
    from splunk.appserver.mrsparkle.controllers.admin import AdminController
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/admin.py", line 25, in <module>
    from splunk.appserver.mrsparkle.controllers.appinstall import AppInstallController
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/appinstall.py", line 22, in <module>
    from splunk.appserver.mrsparkle.lib import module
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/module.py", line 465, in <module>
    moduleMapper = ModuleMapper()
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/module.py", line 83, in __init__
    self.installedModules = self.getInstalledModules()
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/module.py", line 28, in helper
    return f(*a, **kw)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/module.py", line 448, in getInstalledModules
    mods = self.getModuleList(root)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/module.py", line 37, in helper
    return f(*a, **kw)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/module.py", line 223, in getModuleList
    mod = __import__(modname)
  File "/opt/splunk/etc/apps/SA-VMNetAppUtils/appserver/modules/SOLNTreeNav/SOLNTreeNav.py", line 5, in <module>
    import UserDict
ModuleNotFoundError: No module named 'UserDict'
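For context (my reading of the traceback, not a confirmed fix): `UserDict` was a top-level module in Python 2 but lives in the `collections` module in Python 3, so an app written for Python 2 (here, SA-VMNetAppUtils) fails under Splunk's bundled Python 3.7. The Python 3 spelling:

```python
# Python 2 style (raises ModuleNotFoundError on Python 3):
#   import UserDict
# Python 3 equivalent:
from collections import UserDict

# UserDict wraps a regular dict and behaves like one.
d = UserDict({"a": 1})
d["b"] = 2
print(dict(d))  # {'a': 1, 'b': 2}
```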
Hi, I have scheduled an alert to send an email with 5 columns in the body of the email as well as in the attached CSV. The 5th column is very large, so I don't want to show that column in the body of the email, but I do want to include it in the attached CSV. How is that possible? What are the options?
Hello, I'm getting logs from my customer, and the timestamp there has only the day and month, without the year. This causes Splunk to index the events with future dates. Example:

12/31/20 11:59:59.000 PM
Dec 31 23:59:59 csm kerne

How can I modify the date before indexing so that I see the correct timestamp? Thanks
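One way to reason about the year-selection rule (a Python sketch of the logic only, not Splunk configuration): when a timestamp lacks a year, assume the current year, and roll back one year if that would place the event in the future.

```python
from datetime import datetime

def parse_no_year(text: str, now: datetime) -> datetime:
    """Parse e.g. 'Dec 31 23:59:59', picking the year so the result is not in the future."""
    ts = datetime.strptime(text, "%b %d %H:%M:%S").replace(year=now.year)
    if ts > now:
        ts = ts.replace(year=now.year - 1)
    return ts

now = datetime(2020, 4, 20, 12, 0, 0)
print(parse_no_year("Dec 31 23:59:59", now))  # 2019-12-31 23:59:59
print(parse_no_year("Jan 15 08:30:00", now))  # 2020-01-15 08:30:00
```

In Splunk itself, timestamp recognition is normally tuned via props.conf (TIME_FORMAT and related settings) rather than code; the sketch only illustrates the rule such a configuration needs to encode.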
Some facts beforehand

DB View Setup

We are in timezone Europe/Zurich, that is CEST +02:00 including daylight saving time. Since the switch to daylight saving time on 29 March, we've been having issues with the time conversion on our Splunk DB Connect inputs. We've never configured a specific timezone in the DB Connect app or on the inputs; normally it just worked fine. We have multiple DB views, from which we read and then index into Splunk. This particular view has these four columns:

EVENT_ID NUMBER
LOG_LEVEL NUMBER
EVENT_TIME TIMESTAMP
EVENT CLOB

It is a follow-tail input, with the rising column set to EVENT_ID. The timestamp is extracted from the EVENT_TIME column. The events are not updated, hence it's safe to index them with the follow-tail method.

DB Input Setup

The input's SQL query looks like this:

SELECT * FROM (SELECT * FROM "ONL"."DBG_EVENT_LOG" WHERE EVENT_TIME >= sysdate-1) t WHERE EVENT_ID > ? ORDER BY EVENT_ID ASC

WHERE EVENT_TIME >= sysdate-1 is there to improve query performance, as it is a really big DB view. The EVENT column contains several lines that belong together; each line starts with another timestamp. When Splunk indexes the events, it writes all columns prefixed by a timestamp, which I assume is the timestamp from EVENT_TIME. This gives us events in Splunk with the following structure:

<timestamp>, EVENT_ID="<EVENT_ID>", EVENT_TIME="<EVENT_TIME>", EVENT="<EVENT>"

For some reason it omits LOG_LEVEL, but this is no problem at all.

Splunk Query for Analysing the Issue

I have the following Splunk query to compare the timestamps to each other:

|search <base search for raw events>
|eval indextime=strftime(_indextime, "%c")
|eval time=strftime(_time, "%c")
|rex field=_raw "(?<firstdate>\S+\s\S+),"
|table time, EVENT_TIME, firstdate, indextime
|sort by indextime DESC

The |rex command just extracts the timestamp which prefixes all indexed events into the Splunk field firstdate. I called it <timestamp> in the event structure above.
Timestamps

So we have these four timestamps:

- _time as time
- _indextime as indextime
- EVENT_TIME (incorporated into the Splunk event)
- firstdate (incorporated into the Splunk event)

Now on to the issues...

Issue with no Timezones Set at All

I haven't set any timezone configuration at all; I let Splunk automatically decide what timezone the servers run on and how the events should be converted. I get the following timestamps:

time                     | EVENT_TIME                 | firstdate               | indextime
Mon Apr 20 13:01:04 2020 | 2020-04-20 12:01:04.382576 | 2020-04-20 12:01:04.382 | Mon Apr 20 12:01:30 2020
Mon Apr 20 13:01:04 2020 | 2020-04-20 12:01:04.369736 | 2020-04-20 12:01:04.369 | Mon Apr 20 12:01:30 2020

Splunk indexes the event chronologically at the wrong time; see the difference of 1 hour between time and indextime.

Issue with all Timezones Set to Europe/Zurich

To try with explicitly set timezones, I configured the following:

- Splunk starts with the environment variable TZ="Europe/Zurich" set (in the systemd .service unit file)
- Added the option -Duser.timezone="Europe/Zurich" via the GUI in Splunk DB Connect App -> Configuration -> Settings -> General, to both Task Server JVM Options and Query Server JVM Options
- Set the DB input timezone to Europe/Zurich : +02:00 via the GUI in Splunk DB Connect App -> Configuration -> Databases -> Connections, selecting the input and setting Timezone

When searching for events now, I get the following differing timestamps:

time                     | EVENT_TIME                 | firstdate               | indextime
Mon Apr 20 12:17:41 2020 | 2020-04-20 12:17:41.679789 | 2020-04-20 11:17:41.679 | Mon Apr 20 12:18:02 2020
Mon Apr 20 12:17:41 2020 | 2020-04-20 12:17:41.599967 | 2020-04-20 11:17:41.599 | Mon Apr 20 12:18:02 2020

Either firstdate misses the daylight saving configuration and hence is only +01:00, or it actually has daylight saving applied but is indexed as UTC, hence again only +01:00.

Questions

I'd be happy with a solution to either of these questions:

- How to remove that prefixed timestamp?
- How to disable timezone conversion for that timestamp?
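The consistent one-hour gap looks like a conversion that applies Zurich's winter offset (+01:00) where the DST offset (+02:00) is expected. A Python sketch using the stdlib zoneinfo (Python 3.9+, system tzdata assumed available) shows the two offsets around the 29 March switch:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

zurich = ZoneInfo("Europe/Zurich")

# Same wall-clock time, before and after the DST switch on 2020-03-29.
before = datetime(2020, 3, 28, 12, 0, tzinfo=zurich)
after = datetime(2020, 3, 30, 12, 0, tzinfo=zurich)

print(before.utcoffset())  # 1:00:00  (CET, winter)
print(after.utcoffset())   # 2:00:00  (CEST, summer)
```

Any component in the chain (JVM, driver, Splunk) that keeps using the fixed +01:00 offset after the switch would produce exactly the observed one-hour discrepancy.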
Some Specs

- Splunk 7.3.3 (build 7af3758d0d5e)
- DBX 3.1.4 and 3.3.0 (after upgrade)
- ojdbc7.jar 12.1.0.2.0
- java8 jdk1.8.0_202
- OS Linux RHEL 7.6

Sorry for the long post, I just wanted to provide all possibly relevant information.
Hello! I'm quite new to Splunk and I'm wondering if there's a way to check server resource percentages. This is my setup: I have three servers running: (1) Splunk Enterprise, (2) a Splunk Universal Forwarder, and (3) another Universal Forwarder. All servers run on AWS. On servers (2) and (3) reside some microservices that I need for my website. I've installed the Splunk App for Infrastructure and the Splunk Add-on for Infrastructure, and everything works fine, except that all I can monitor is the servers themselves (CPU usage, uptime, idle time, etc.), not the microservices that reside on each server. Is there a way, or an add-on I can use, to monitor the amount of memory my microservices consume on each server? Or whether they are still running? Thank you in advance!
We want to parse highly nested JSON into expanded tables. We found that the following code works, given we apply the | rename *.* as *_* as many times as the nesting is deep. Without replacing the "." Splunk does not make all fields and subfields available. Might there be a more generic solution?

index="adm_compute_qcheck"
| rename *.* as *_*
| rename *.* as *_*
| rename *.* as *_*
| rename *.* as *_*
| rename *_{}* as **
| rename *{}_* as **
| rename *{}_* as **

Here is the first part of the JSON:

[
  { "BIOS": { "manufacturer": "INSYDE Corp.", "SystemBiosMajorVersion": 0, "SystemBiosMinorVersion": 41, "SMBIOSBIOSVersion": "0.99" } },
  { "Checkpoint": { } },
  { "ClusterName": null },
  { "CPUType": { "NumberOfCores": 16 } },
  { "HBA": [ { "active": true, "drivername": "elxfc", "driverversion": "12.2.207.0", "firmwareversion": "11.4.204.25", "optionromversion": "11.4.204.25", "manufacturer": "Emulex Corporation", "model": "LPe32002-AP", "serialnumber": "FC83980875" }, { "active": true, "drivername": "elxfc", "driverversion": "12.2.207.0", "firmwareversion": "11.4.204.25", "optionromversion": "11.4.204.25", "manufacturer": "Emulex Corporation", "model": "LPe32002-AP", "serialnumber": "FC83980875" } ] },
  { "HPE": [ ] },
  { "HPEDiskCount": 0 },
  { "HPELogicalDisks": { "Status": null, "RaidLevel": null, "ID": null, "Capacity": null } },
  { "HPEPhysicalDisks": [ ] },
  { "Mig": { "VirtualMachineMigrationEnabled": true, "VirtualMachineMigrationPerformanceOption": 2, "VirtualMachineMigrationAuthenticationType": 1, "MaximumVirtualMachineMigrations": 2, "MaximumStorageMigrations": 2 } },
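For comparison, the transformation the repeated renames approximate can be written as a generic recursive flatten (a Python sketch, not an SPL replacement): join keys with underscores at any depth, using the list index for array elements.

```python
def flatten(obj, prefix=""):
    """Recursively flatten nested dicts/lists into a flat {joined_key: value} dict."""
    out = {}
    if isinstance(obj, dict):
        for k, v in obj.items():
            out.update(flatten(v, f"{prefix}{k}_"))
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            out.update(flatten(v, f"{prefix}{i}_"))
    else:
        # Leaf value: strip the trailing separator from the accumulated prefix.
        out[prefix[:-1]] = obj
    return out

print(flatten({"BIOS": {"manufacturer": "INSYDE Corp.", "SystemBiosMajorVersion": 0}}))
# {'BIOS_manufacturer': 'INSYDE Corp.', 'BIOS_SystemBiosMajorVersion': 0}
```

Because the recursion handles any depth, no fixed number of rename passes is needed — which is exactly the limitation of chaining `| rename *.* as *_*`.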
Hello there, Is there a way to address all fields case-insensitively? To illustrate my point, I have this query:

index=*aws_config* resourceType="AWS::EC2::Volume"
| eval tag_CostCenter=if(isnotnull('tags.Brand.CostCenter') OR isnotnull('tags.brand.costcenter') OR isnotnull('tags.brand.Costcenter') OR isnotnull('tags.brand.costCenter') OR isnotnull('tags.brand.COSTCENTER') OR isnotnull('tags.brand.costCENTER'), "Yes", "No")

My data can have fields CostCenter, costCenter, COSTCENTER, and many other case variations (and there can be tens of variations). Currently I am handling them by separating each variation with an OR. Is there a way to query collectively on all case variations of a field name instead of using multiple OR clauses? I know we can use coalesce or field aliases, but that still means I need to specify all possible field names somewhere. Thanks.
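The underlying check can be stated generically (a Python sketch with the field names from the question; `has_key_ci` is a hypothetical helper, not a Splunk function): normalize every key to lowercase, then a single comparison covers all case variants instead of one OR clause per spelling.

```python
def has_key_ci(record: dict, dotted: str) -> bool:
    """Case-insensitive membership test for a dotted field name."""
    wanted = dotted.lower()
    return any(k.lower() == wanted for k in record)

event = {"tags.Brand.CostCenter": "CC-123", "resourceType": "AWS::EC2::Volume"}
print(has_key_ci(event, "tags.brand.costcenter"))  # True
print(has_key_ci(event, "tags.brand.owner"))       # False
```

In SPL, a similar normalize-then-compare approach is typically built with wildcarded field lists and foreach-style iteration rather than enumerated OR clauses, though the exact query depends on the data shape.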
Hello, I have several alerts running on a minute basis, and I would like to know, within the SPL of the currently running alert, the corresponding alert name. My tries look as follows:

| rest /services/search/jobs
| rex field=id "(?<jobId>[^//]*)$"
| addinfo
| where jobId = info_sid

but in the jobs there is no way to get the name of the alert, or at least I could not find it. I also tried to scan the saved searches with a similar rest call, but there it is not possible to match on sid, although the title (alert name) is there. I would like to avoid matching on time, like now(), using the rest call to saved searches; it has the potential to be erroneous. Am I missing something? Could you please advise? Kind regards, Kamil
Hi All, Does Enterprise Security Content Update work on Splunk Enterprise without ES? Is it free of charge?
Hello, I would like to know how to find searches that do not succeed (no results, or with errors). Some users complain that their searches do not give any results, and I would like to know if there is a search that surfaces these failures (is there an error code to look for, or something else?). Thank you
I am setting up the connection between the Splunk instance and Splunk Cloud Gateway via a proxy. The proxy server is basically set to allow all, and Splunk Cloud Gateway is installed on the search head. According to the guideline in Troubleshoot Splunk Cloud Gateway Connection Issues:

For the https connection, this returns "healthy":true:

curl https://prod.spacebridge.spl.mobi/health_check -x http://my_proxy_svr:8000

For the wss connection, this returns 200 and 101 Switching Protocols:

curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" -H "Host: prod.spacebridge.spl.mobi" -H "Origin: https://prod.spacebridge.spl.mobi" -H "Authorization: xyz" https://prod.spacebridge.spl.mobi/mobile -x http://my_proxy_svr:8000

However, the status shown in the Cloud Gateway Status Dashboard keeps showing "Not Connected". In the search logs, after restarting splunkd, the proxy config is read by the program at first, but it becomes empty after the first "handshake failed" occurs. Is there any clue as to why Cloud Gateway is not connected? Thank you.