All Topics

I am trying to use the REST API Modular Input app, but I am getting this error:

ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/rest_ta/bin/rest.py" Exception performing request: Invalid header name 'X-APIKeys: accessKey'

The header I have configured is:

X-APIKeys: accessKey=blah;secretKey=blah

Ideas on how to fix?
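The error suggests the whole "Name: value" string ended up where only the header *name* belongs (HTTP header names may not contain a colon, so Python's HTTP clients reject 'X-APIKeys: accessKey' as a name). A minimal sketch of the split that is needed, assuming the value really is everything after the first ": ":

```python
# Hedged sketch: split a raw "Name: value" header line once on ": "
# so name and value travel as separate parts of a headers dict.
raw = "X-APIKeys: accessKey=blah;secretKey=blah"
name, _, value = raw.partition(": ")
headers = {name: value}  # {'X-APIKeys': 'accessKey=blah;secretKey=blah'}
```

How the REST modular input expects the header to be entered in its UI is not shown here; the sketch only illustrates why the colon in the name triggers the exception.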
Hi Team, I am getting the error below in my Splunk local instance:

Error details:
Invalid key in stanza [tcp01] in C:\Program Files\Splunk\etc\apps\XYZ_test_local\default\indexes.conf, line 5: maxTotalDatasize (value: 1024MB).
Invalid key in stanza [tcp01] in C:\Program Files\Splunk\etc\apps\XYZ_test_local\default\indexes.conf, line 7: maxWarmDBcount (value: 4).
Your indexes and inputs configurations are not internally consistent.

indexes.conf:

[tcp01]
coldPath = $SPLUNK_DB/tcp01/colddb
homePath = $SPLUNK_DB/tcp01/db
thawedPath = $SPLUNK_DB/tcp01/thaweddb
maxTotalDatasize = 1024MB
maxHotSpanSecs = 243264
maxWarmDBcount = 4
maxHotbuckets = 3
disabled = false

[monitor://C:\Program Files\Splunk\XYZ_Alerts*]
sourcetype=st01
index=tcp01
blacklist=.(gz|zip)$
initCrcLength=750
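A likely cause is that the rejected keys don't match the documented, case-sensitive indexes.conf spellings, and that size attributes take a plain integer number of MB rather than "1024MB". A corrected stanza might look like this sketch (attribute names per the standard indexes.conf spec):

```ini
[tcp01]
coldPath   = $SPLUNK_DB/tcp01/colddb
homePath   = $SPLUNK_DB/tcp01/db
thawedPath = $SPLUNK_DB/tcp01/thaweddb
# case-sensitive names; sizes are plain integers interpreted as MB
maxTotalDataSizeMB = 1024
maxHotSpanSecs = 243264
maxWarmDBCount = 4
maxHotBuckets = 3
disabled = false
```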
I need to create a new field called ip_address_location and, for each IP address, perform an if. Like this:

if ip = "1.1.1." assign "site_abc" to ip_address_location
if ip = "1.1.2." assign "site_efg" to ip_address_location
etc.

Any suggestions?
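In SPL this kind of chained if is usually written with eval and case(), or moved into a lookup table when the list grows. The Python sketch below just mirrors the mapping logic so it can be checked standalone (the prefixes and site names come from the question; the function name and "unknown" fallback are my own):

```python
# Hypothetical prefix -> site mapping, mirroring a chain of if/case() tests.
SITE_BY_PREFIX = {
    "1.1.1.": "site_abc",
    "1.1.2.": "site_efg",
}

def ip_address_location(ip):
    """Return the site for the first matching prefix, else 'unknown'."""
    for prefix, site in SITE_BY_PREFIX.items():
        if ip.startswith(prefix):
            return site
    return "unknown"
```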
Hello, we'd like to have multiple Opsgenie integrations in the same Splunk instance. Currently the Splunk/Opsgenie integration app can only be installed once; it therefore only allows a single API key to be provided, and it comes with a predefined name. Our use case is that we need to send alerts to different organisations in Opsgenie. We'd like each integration to have its own API key and its own name, so that when a user creates an alert, he or she can select from several preconfigured Opsgenie integrations that deliver to one or more Opsgenie accounts. Is the code for the Opsgenie integration open source? In other words: can we open a PR that would add this functionality? Thanks, Rens van Leeuwen
Good afternoon. Is there Splunk documentation stating that in a SHC the servers must be identical at the hardware level? I ask because I'd like to know: what would be the disadvantage if my captain has 40 cores and 60 GB RAM compared to the other search head members that have 60 cores and 250 GB RAM? We currently have a server that acts as captain, but it only runs ad-hoc queries and users have almost no access to it, so it carries no load and is only dedicated to building the bundle while it is captain of the cluster. Any information is welcome. Regards
Object:
    Object Server: Security
    Object Type: File
    Object Name: \Device\HarddiskVolume54\Tax\Confidential
    Handle ID: 0x1110
    Resource Attributes: S:AI
Hello, I have a dashboard where I am displaying events which are JSON formatted (a requirement not to have them in raw format) and I need certain keywords to be highlighted. Since the events are JSON formatted, I cannot simply use Splunk's highlight function. Instead I have tried to use JavaScript and CSS. I can get it to work in online editors like jsfiddle.net, where I just copy-paste my .js and .css files together with the HTML file I download from my dashboard, and everything works fine. However, when I upload the .js and .css files to \Splunk\etc\apps\search\appserver\static, the highlighting works e.g. in the title of my panel, but it does not highlight the keywords in the JSON formatted events, which is what I need. See image and code (note: the image doesn't include the real JSON data that I'm going to use later; the keyword 'gustav' should be highlighted in the events but is not). Does anyone have any idea what is causing this or how it could be fixed? The code I'm using is the following:

.js file:

function highlight(elem, keywords, cls = 'highlight') {
  const flags = 'gi';
  // Sort longer matches first to avoid highlighting keywords within keywords.
  keywords.sort((a, b) => b.length - a.length);
  Array.from(elem.childNodes).forEach(child => {
    const keywordRegex = RegExp(keywords.join('|'), flags);
    if (child.nodeType !== 3) { // not a text node
      highlight(child, keywords, cls);
    } else if (keywordRegex.test(child.textContent)) {
      const frag = document.createDocumentFragment();
      let lastIdx = 0;
      child.textContent.replace(keywordRegex, (match, idx) => {
        const part = document.createTextNode(child.textContent.slice(lastIdx, idx));
        const highlighted = document.createElement('span');
        highlighted.textContent = match;
        highlighted.classList.add(cls);
        frag.appendChild(part);
        frag.appendChild(highlighted);
        lastIdx = idx + match.length;
      });
      const end = document.createTextNode(child.textContent.slice(lastIdx));
      frag.appendChild(end);
      child.parentNode.replaceChild(frag, child);
    }
  });
}

var myElement = document.getElementById("events_highlighted");
// Used document.body instead of the value of myElement
highlight(document.body, ['is', 'Robotics', 'Top', 'gustav', 'failed', 'success', 'info', 'error', 'event', 'res']);

.css file:

.highlight {
  background: lightpink;
}
Hello, I am extracting information from logs via rex, and some of it is in military (24-hour) time format (e.g. 13:15). I also extract times such as 11:15, but I want the output to be consistent in a 12-hour AM/PM format. Example: 1:15 PM instead of 13:15; 11:15 AM instead of 11:15. I was wondering if it is possible to convert the extracted values so that anything between 13:00 and 23:59 is shown as PM.

Here is my log:
Here is my table currently.
Here is my query so far:

index=monitoring sourcetype=PEGA:WinEventLog:Application ( SourceName="RoboticLogging" OR SourceName="Application" ) ("Type=" "Information")
| rex field=_raw "Department=\"(?<Department>.+?)\""
| where Department = "HRSS_NEO" OR Department = "HRSS Daily NEO Report"
| rex "Duration:\s*(?<hh>\d+):(?<mm>\d+):(?<ss>\d+\.\d+)"
| rex "Number of supervisor reminder memos sent:\s*(?<memo>[^,]+)"
| rex "Number of New Employees in NEO Report with job title Temporary Agy Svc Asst:\s*(?<yes>[^,]+)"
| rex "Number of New Employees in NEO Report without job title Temporary Agy Svc Asst:\s*(?<no>[^,]+)"
| rex "Number of supervisors found when searching AD:\s*(?<valid>[^,]+)"
| rex "UserID=\"UNTOPR\\\(?<UID>.+?)\""
| rex "Number of supervisors not found when searching AD:(?<invalid>[^,]+)"
| rex "Email Received\s*Time:(?<received>.{5}?)"
| rex "Email Process Started At:\s*(?<processed>.{5}?)"
| eval processed = if(isnull(processed), "-", processed)
| rex "StartTime:\s*(?<startTime>.{5})"
| eval startTime = if(isnull(startTime), "-", startTime)
| eval dur = round(((hh * 3600) + (mm * 60) + ss), 0)
| eval avghndl = round(dur/memo, 0)
| eval dur = tostring(dur, "duration")
| eval avghndl = tostring(avghndl, "duration")
| eval Time = strftime(_time, "%m/%d/%Y at %r")
| where dur != " "
| eval valid = if(isnull(valid), "0", valid)
| eval received = if(isnull(received), "-", received)
| replace "" with "0"
| eval strr = host." : ".UID
| eval strr = upper(strr)
| eval invalid = if(isnull(invalid), "0", invalid)
| fields - _time
| dedup Time
| table strr, Time, dur, received, startTime, processed, memo, yes, no, valid, invalid, avghndl
| rename strr as "Workstation : User", dur as "Duration (HR:MIN:SEC)", memo as "Supervisor Reminder Memos Sent", yes as "New Temporary Employees", no as "New Employees (Not Temporary)", valid as "Valid Aliases", invalid as "Invalid Aliases", avghndl as "Average Handle Time per Email", received as "Email Received Time", startTime as "Start Time", processed as "Email Processed Time"
| sort by Time desc
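SPL can do this conversion with a strptime/strftime pair, since the format code "%I:%M %p" yields a 12-hour clock with AM/PM. The standalone Python sketch below shows the same conversion so the edge cases are easy to check (the function name is mine; it assumes the extracted value always looks like "HH:MM"):

```python
from datetime import datetime

def to_12_hour(hhmm):
    """Convert '13:15' -> '1:15 PM' and leave morning times as e.g. '11:15 AM'."""
    t = datetime.strptime(hhmm, "%H:%M")
    # %I is zero-padded ('01:15 PM'), so strip the single leading zero.
    return t.strftime("%I:%M %p").lstrip("0")
```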
I see the errors below in the search head cluster. Can someone help resolve the issue?

02-11-2020 13:59:26.997 +0000 WARN ArtifactReplicator - Replication connection to ip=10.164.196.166:8999 timed out
02-11-2020 13:59:26.997 +0000 WARN ArtifactReplicator - Connection failed
02-11-2020 13:59:26.997 +0000 WARN ArtifactReplicator - event=artifactReplicationFailed type=ReplicationFiles files="/opt/splunk/var/run/splunk/dispatch/splunktemps/send/s2s/schedulerpbasav_ZWVfc2VhcmNoX3NwbHVua19zdXBwb3J0_RMD59b3a79690728a412_at_1581429480_498_638683B3-25D9-4D2A-AF2E-4E43362FDBFA-644D578C-F001-4711-B459-2338E22DF399.tar" guid=644D578C-F001-4711-B459-2338E22DF399 host=xx.xx.xxx.166 s2sport=8999 aid=746. Connection failed

We also see that some reports are generated without data, and only some of the time. Not sure what is causing this.
We have folder directories on the application server and collect data through a forwarder. I need to calculate the file size and last-modified time for certain files in different directories. Can anyone help me with how to do it?
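One common pattern for this is a scripted input on the forwarder that emits size and mtime for each file of interest. A minimal sketch, assuming a Python interpreter is available on the host (the function name, field names and output format are my own):

```python
import os
import time

def file_info(path):
    """Return size in bytes and last-modified time for one file."""
    st = os.stat(path)
    return {
        "path": path,
        "size_bytes": st.st_size,
        "last_modified": time.strftime("%Y-%m-%d %H:%M:%S",
                                       time.localtime(st.st_mtime)),
    }

# A scripted input would typically print one key="value" line per file so
# Splunk can index it with automatic field extraction, e.g.:
#   print(" ".join('%s="%s"' % (k, v) for k, v in file_info(p).items()))
```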
Hi, I'm looking at possibly integrating certain of my Splunk dashboards with Power BI, hopefully using a REST API. Has anyone had any success with this? Thanks
How do I combine three fields into one field and display it as a table? I need one field called emails consisting of the from, to and user fields.
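In SPL the usual tools are eval with mvappend(from, to, user) (one multivalue field) or plain string concatenation. The Python sketch below mirrors the mvappend-style behaviour of skipping null values (the function name is mine):

```python
def combine_emails(from_, to, user):
    """Collect the non-null values of three fields into one multivalue list."""
    return [v for v in (from_, to, user) if v is not None]
```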
I'm wondering how I can write a simple SQL command to join two tables in the SQL editor on Splunk. For example, when I run the query below it gives me a syntax error:

SELECT * FROM "sysmaster":"sysadtinfo"."sysbufpool"

sysmaster: database name
sysadtinfo: table1
sysbufpool: table2

Is this the right syntax?
Hi All, I am using a script to fetch an HTTP response as a Splunk raw event. For this I am passing a parameter as a variable, whose value is in another conf file. The inputs.conf is as below:

[script:///opt/splunkforwarder/etc/apps/search/bin/scripts/urlhealthcheck.sh HEALTHCHECK_URL]
sourcetype = healthcheck
disabled = false
interval = 300
index = main

The configuration file where the parameter HEALTHCHECK_URL is stored (example.conf):

HEALTHCHECK_URL=https://healthcheckurl.domain.com

The shell script urlhealthcheck.sh:

#!/bin/sh
url=$(cat PRA.conf | grep $1 | awk -F "=" '{print $2}')
responsecode=$(wget -S --spider --no-check-certificate $url 2>&1 | grep "HTTP/" | awk '{print $2}')
response=$(wget -q --no-check-certificate -O - $url 2>&1)
echo "URL=$url, ResponseCode=$responsecode, Response=$response"

This shell script runs perfectly from the terminal as

sh /opt/splunkforwarder/etc/apps/search/bin/scripts/urlhealthcheck.sh HEALTHCHECK_URL

or as

./opt/splunkforwarder/etc/apps/search/bin/scripts/urlhealthcheck.sh HEALTHCHECK_URL

giving the output:

[ { "code": 200, "response": "Health Check: Succeeded" } ]

But when run from inputs.conf it gives this response:

wget: missing URL
Usage: wget [OPTION]... [URL]...
Try `wget --help' for more options.

If I change the parameter from HEALTHCHECK_URL to https://healthcheckurl.domain.com, the HTTP response comes out correctly without an error:

[script:///opt/splunkforwarder/etc/apps/search/bin/scripts/urlhealthcheck.sh https://healthcheckurl.domain.com]
sourcetype = healthcheck
disabled = false
interval = 300
index = main

What is the reason that I am not able to pass the parameter as a variable through inputs.conf, even though the script works fine from the terminal?
hi
I use a search which adds a unit value (GB) at the end of the result:

| eval FreeSpace=FreeSpace." GB", TotalSpace=TotalSpace." GB"

I need to use threshold coloring on this value, but it doesn't work because of the unit value at the end...

<colorPalette type="list">[#DC4E41,#F1813F,#53A051]</colorPalette>
<scale type="threshold">10,80</scale>
</format>

What do I have to do, please?
I'm looking to send events from Splunk to ServiceNow using the add-on. The catch is, for security reasons, we may be required to push the data from Splunk to ServiceNow via a MID Server.

Normal approach: Splunk -> ServiceNow
Possible approach required for the client: Splunk -> MID Server -> ServiceNow

Does the add-on support sending the event to the MID Server at all? If not, what alternative options are available?
Dear All, We have a Deployment Server with around 1900+ clients reporting to it. Currently it is on v7.0 and we are planning to upgrade it to v7.3.3. The documentation says to disable the deployment server and then upgrade, but if I disable it, what would be the behavior of the clients? What would be the safest way to upgrade the deployment server without losing any data? Also, will a Deployment Server on v7.3.3 work well with an Indexer Cluster on v7.0, or could I face any compatibility issues? Regards, Abhi
Hi All, Is it possible to get the earliest available date for an index and sourcetype? I tried tstats and metadata, but they depend on the search time range. I need to get the earliest time that I can still search on Splunk, by index and sourcetype, without using "All time". A good example would be data from 8 months ago, without using too many resources. Just let me know if it's possible.
Hi, I have the following log format. How can I break this multiline event on the condition that a timestamp like "2020-01-23 03:50:49,063" arrives? Note that the log needs to be indexed with Local Time.

//******************************************************************************************************
// Module : teste 6.15.0001.77
// Local Time : 23/01/2020 03:50:48.985 (Daylight Saving Time=Off)
// System Time (UTC) : 23/01/2020 06:50:48.985
//
// Domain Name : itau.corp.ihf
//
// 32/64 Bit : 64 Bit
//
// Module Name, File Version, Modification Date:
// ----------------------------------------------------------------------------------------------------
// teste.exe, 6.15.0001.77, 05/08/2019 19:58:36
//
//******************************************************************************************************
2020-01-23 03:50:49,063 | INFO | 4 | testeService.OnStart | | teste | testeService.OnStart: Log Client initialized successfully.
2020-01-23 03:50:49,094 | INFO | 4 | testeService.OnStart | | teste | testeService.OnStart: Trying to load teste modules...
2020-01-23 03:50:49,610 | INFO | 15 | ServiceHost | | teste | testeService.HandleServiceHostLogEvent: Going to register WCF teste
2020-01-23 03:50:53,391 | INFO | 15 | ServiceHost | | teste | testeService.HandleServiceHostLogEvent: Config file already defines ServiceModel configuration, for service teste. Trying to load updated configuration and combine (for Accessible mode only!)...
2020-01-23 03:50:53,485 | INFO | 15 | ServiceHost | | teste | testeService.HandleServiceHostLogEvent: Finished writing updated ServiceModel configuration to config file, for service teste.
2020-01-23 03:50:53,813 | INFO | 15 | ServiceHost | | teste | testeService.HandleServiceHostLogEvent: << All WCF services succeeded to publish. took: 00:00:00.3281398

In this example, the log should be broken into 6 events, with the entry beginning "2020-01-23 03:50:49,063" as the first.
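Assuming every event starts with that "YYYY-MM-DD HH:MM:SS,mmm" pattern, a props.conf sketch for the indexer or heavy forwarder might look like the one below (the sourcetype name is a placeholder; the attribute names are the standard props.conf ones; if the timestamps need to be read as a specific local timezone, the TZ attribute can pin it):

```ini
[teste:log]
SHOULD_LINEMERGE = false
# Break only where the next chunk starts with "YYYY-MM-DD HH:MM:SS,mmm"
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
```

With this breaking rule the // header block before the first timestamp would be indexed as one leading event of its own.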
Hi all, Our environment consists of, amongst other things, a multisite (3) clustered environment. Each site has three indexers making a total of nine indexers. We also have a replication factor of 3. On each indexer the hot/warm and cold buckets are on separate filesystems. On one of the indexers, the filesystem containing the cold buckets suffered a hard disk failure which has destroyed the entire FS. My question is: when the disk/filesystem is repaired, will Splunk automatically rebuild the cold buckets from the replications? If it does, will it do it when I start Splunk or is there some maintenance commands that I will need to issue? Many thanks, Mark.