All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All, ServiceNow supports multiple ticket types such as "RITM", "SCTASK", and "INCIDENT". Our Splunk Cloud instance today can only create "INCIDENT" type tickets. Very curious whether Splunk SOAR can extend this functionality and let us create "SCTASK", which is our preferred task type in the ticketing system. Thanks~!
Hi all, I have events coming in that have multivalue fields, but it is not always the same fields that are multivalue. I want all the fields in the events resulting from a search to be concatenated into single-value fields. Example. Result now shows:
dest
  xyz
  fff
Result should show:
dest
  xyz [delimiter] fff
Just to be sure everyone understands: dest here is only an example. I need a query I can run that changes every multivalue field, regardless of the field name. Cheers,
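A minimal sketch of one way to do this, assuming a pipe character as the delimiter (swap in whatever delimiter you need): foreach iterates over every field in the results, and mvjoin collapses any field that holds more than one value into a single delimited string. Field names containing spaces or dots may need extra quoting.

| foreach * [ eval <<FIELD>> = if(mvcount('<<FIELD>>') > 1, mvjoin('<<FIELD>>', "|"), '<<FIELD>>') ]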
Hi Splunkers, I installed a private app with its own set of roles on a Splunk Cloud instance (Victoria Experience), but I couldn't find those roles under Settings --> Roles. My authorize.conf is fine if I try it locally. Not sure what the possible reason could be. Are there any limitations on using app-specific roles on a Cloud instance? Please suggest.
Hi folks, I have a HF already sending data to one cloud instance, however I'd like to start sending data to a different cloud stack from the same HF. Can anyone give an example of the configuration in outputs.conf? Should I configure it in local or default? Should I use different receiving ports for this configuration? If so, which one do you recommend? I appreciate your help. Thanks.
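For what it's worth, a hedged sketch of a dual-destination outputs.conf; the group names and hostnames are placeholders, and a real Splunk Cloud stack normally supplies its own forwarder credentials app that already contains a tcpout group and certificates, so in practice the stanzas below come from merging those two apps rather than being typed by hand.

# outputs.conf (in an app's local directory on the HF)
[tcpout]
# listing both groups clones all data to both stacks
defaultGroup = stack_one, stack_two

[tcpout:stack_one]
# placeholder hostname; normally provided by the first stack's credentials app
server = inputs.stack-one.splunkcloud.com:9997

[tcpout:stack_two]
# placeholder hostname for the second stack
server = inputs.stack-two.splunkcloud.com:9997

Both stacks can receive on the default 9997 port. Settings in local take precedence over default and survive app upgrades, so local is generally the right place for site-specific changes.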
Hello, We are using Splunk v8.2.5 (Build: 77015bc7a462 if this helps). Since we upgraded, we no longer receive errors or warnings when stats, eventstats, or streamstats is not returning the correct values. We have a lookup CSV of nearly 3 million records containing several fields that need to be counted and compared: prize code, address, email, etc. The eventstats command fails and there is no error or warning. However, the stats command works. This would be okay if we had only one field to be counted, but we have 4-8 fields that must be counted and compared. Using the stats command quickly becomes a nightmare because every field that is not being counted in relation to the particular field in the by clause would need to be added using values(FIELDNAME). The eventstats command would be cleaner. Or would it?

| bin _time span=1d
| eventstats count(prize_code) as count_prize_code by _time, address
| dedup address, count_prize_code, _time
| eventstats count(_time) as count_prize_code_dates, sum(count_prize_code) as sum_count_prize_code by address
| dedup address, count_prize_code_dates, sum_count_prize_code
| table address, count_prize_code_dates, sum_count_prize_code

Thanks and God bless, Genesius
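For comparison, a sketch of the same pipeline rewritten with stats in place of eventstats + dedup. Because stats collapses rows as it aggregates, it tends to stay within the memory ceiling (max_mem_usage_mb in limits.conf) that eventstats can silently hit on multi-million-row result sets.

| bin _time span=1d
| stats count(prize_code) as count_prize_code by _time, address
| stats count(_time) as count_prize_code_dates, sum(count_prize_code) as sum_count_prize_code by address
| table address, count_prize_code_dates, sum_count_prize_code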
Hello, I have a distributed environment with an IDX cluster and a DS. The DS is used to deploy config to the IDX cluster Manager Node and from there to the IDX cluster nodes. It is working fine. I upgraded the DS from 8.1.6 to 8.1.10.1 (yes, because of SVD-2022-0608...). The Manager Node is on 8.1.6. After the upgrade I noticed these log messages on the MN:
10.88.28.93 - - [13/Jul/2022:15:56:33.540 +0200] "GET /services/server/info HTTP/1.1" 401 130 "-" "Splunk/8.1.10.1 (Linux 3.10.0-1160.62.1.el7.x86_64; arch=x86_64)" - 0ms
10.88.28.93 is the IP address of the DS. I checked the Search peers config on the DS and the MN was there in a "sick" state. I edited its config by re-entering the Remote username and Remote password, after which the MN changed status to Healthy and everything is working fine again. My question is: what happened during the upgrade of the DS? My idea is that a new private+public key pair was generated on the DS on first run after the upgrade (and then I had to distribute the new public key to the MN by re-entering the Remote username and password, of course), but am I right? And if I am right, why did this happen? I have made many Splunk upgrades before and never experienced this. Any info/hint/clue will be highly appreciated. Thank you. Best regards, Lukas Mecir
I'm running Splunk Enterprise 8.2.5 with a deployment server on Windows 2019. I'm deploying the Splunk Add-on for Unix app to my Linux estate. The app runs various .sh shell scripts to capture data and ship it back to the indexers. The problem is that these shell scripts have no execute permission when deployed. I have to run a script at the forwarder to add the execute bit for the Splunk user so that the UF can run them. This is fine as a one-off, but if we update the deployment app it happens again. Is there any way to handle this with Splunk itself?
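In case no Splunk-native option turns up, one low-tech workaround is to re-apply the execute bit outside Splunk on each forwarder; the sketch below assumes a default /opt/splunkforwarder install path and the standard Splunk_TA_nix app name.

# hypothetical crontab entry for the splunk user on each forwarder host;
# re-applies the execute bit shortly after a redeploy strips it
*/30 * * * * chmod +x /opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/*.sh 2>/dev/null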
I have data with two fields: User and Account. Account is a field with multiple values. I am looking for a search that shows all the results where User does NOT match any of the values in Account. From the sample data below, the search should only give "Sample 1" as output.

Sample 1
User: p12345
Account: redfox, h12345, home\redfox, new@redfox.com

Sample 2
User: L12345
Account: redsox, L12345, sky\newid, sam@redsox.com

I have tried makemv, but I am not getting the desired output.
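A minimal sketch, assuming Account is already a multivalue field and that "matching" means an exact, case-insensitive match against any single Account value: mvmap runs the comparison against each Account value, and rows where nothing matched keep a null result, which the where clause then selects.

| eval hit=mvcount(mvmap(Account, if(lower(Account)=lower(User), 1, null())))
| where isnull(hit)

If a substring match is wanted instead (e.g. User appearing inside an email address), the comparison inside the if() would need to change accordingly.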
Is it possible to change the log rotation timing for the internal logs that the Universal Forwarder and Heavy Forwarder write to the OS, for example splunkd.log? Currently the logs are rotated by file size, but can we rotate them on a daily basis instead? Is that possible?
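As far as I know, splunkd's internal log rotation is size-based only, so a true daily rotation isn't an out-of-the-box option; what can be tuned are the size and retention thresholds, by copying the relevant appender lines from $SPLUNK_HOME/etc/log.cfg into log-local.cfg. A hedged sketch (the values are illustrative, and the A1 appender name should be verified against your own log.cfg, where it is the splunkd.log appender in default installs):

# $SPLUNK_HOME/etc/log-local.cfg - override rotation thresholds for splunkd.log
# rotate when the file reaches roughly 10 MB and keep 7 rotated copies
appender.A1.maxFileSize=10000000
appender.A1.maxBackupIndex=7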
It's a bit off-topic but I have a kinda unusual use case. I want to get the events out of a Windows box and store them on a Linux machine (in this particular case it's a Windows VM and I want to export the events to the hypervisor). Of course for Linux it's easiest to receive syslog messages, but as we all know Windows doesn't have a built-in syslog server and you can't easily push events through a syslog channel with built-in Windows tools. So far I've been using the free SolarWinds Event Log Forwarder but it has its flaws - most notably it has problems with starting automatically with the Windows machine. It ends up with the process started but not forwarding events unless I manually disable and re-enable the subscriptions. That's unacceptable. So I was thinking that maybe I should just install a UF and, instead of using splunk-tcp output, push events with plain tcp output to a syslog server. Does anyone have experience with this? The upside is that I know the UF works relatively reliably and I wouldn't have to worry about it too much. The downside is that I would have to define a separate input for each event log channel (but I think I'd simply script it and have it run every few days to synchronise event log channels with inputs.conf). I could of course set up a whole Splunk Free environment on my hypervisor, but that would be a huuuuuge overkill. Any hints for the UF installation/configuration?
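A hedged outputs.conf sketch of the plain-TCP idea; the hostname and port are placeholders, and with sendCookedData=false the UF emits raw event text over TCP rather than strictly RFC-formatted syslog, so the receiver has to be happy with plain lines.

# outputs.conf on the UF
[tcpout]
defaultGroup = raw_to_hypervisor

[tcpout:raw_to_hypervisor]
# placeholder destination: the hypervisor's TCP/syslog listener
server = hypervisor.example.local:514
sendCookedData = false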
I'm bemused with Splunk again (otherwise I wouldn't be posting here ;-)). But seriously - I have an indexer cluster and two separate search head clusters connected to that indexer cluster. One shcluster has ES installed, one doesn't. Everything seems to be working relatively OK. I have a "temporary" index into which I ingest some events, from which I prepare a lookup by means of a report containing a search ending with | outputlookup. And that also works OK. Mostly. Because it used to work on the "old" shcluster (the one with ES). And it still does. But because we now have a new shcluster (the one without ES), and of course lookups are not shared between different shclusters, I defined the report on the new cluster as well. And here's where the fun starts. The report is defined and works great when run manually. But I cannot schedule it. I open the "Edit Schedule" dialog, I fill in all the necessary fields, I save the settings... and the dialog closes but nothing happens. If I open the "Edit Schedule" dialog again, the report is still not scheduled. To make things more interesting, I see entries in conf.log, but they show:

payload: { -
  children: { -
    action.email.show_password: { + }
    dispatch.earliest_time: { + }
    dispatch.latest_time: { + }
    schedule_window: { -
      value: 15
    }
    search: { + }
  }
  value:
}

So there are _some_ schedule-related parameters (and yes - if I verify them in etc/users/admin/search/local/savedsearches.conf they are there):

dispatch.earliest_time = -24h@h
dispatch.latest_time = now
schedule_window = 15

But there is no dispatch schedule being applied, nor is the schedule enabled at all (the enableSched value is apparently not pushed with the confOp). So I'm stuck. I can of course manually edit savedsearches.conf for my user, but that's not the point. The version is 8.2.6.
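For reference, a hedged sketch of what the stanza would need to contain for the schedule to actually take effect (the stanza name and cron expression are hypothetical); enableSched and cron_schedule are exactly the two settings the UI apparently isn't writing in this case.

# etc/users/admin/search/local/savedsearches.conf
[my_lookup_gen_report]
enableSched = 1
cron_schedule = 15 * * * *
dispatch.earliest_time = -24h@h
dispatch.latest_time = now
schedule_window = 15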
Hello, we have a use case. Using Splunk DB Connect, we ingest data from various systems, especially from the ERP. Every change on an article in the ERP is pushed into a temp DB which is monitored by Splunk DB Connect. There are millions of data movements each day. But at the end of the day, we just need to work with the latest unique data in the system for each article. Each event has some 10-30 fields. What is the best way of getting rid of all the duplicates that are coming into the system? Delete? How? Skip? Lookup? Summary DB? How? What ideas might you have, or is there something I'm missing?
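One common pattern is to leave the raw ingest alone and derive a "latest state" view on a schedule, for example a nightly search that keeps the newest event per article and writes it to a lookup; the index, key field and lookup names below are hypothetical.

index=erp_changes earliest=-1d@d latest=@d
| stats latest(*) as * by article_id ```hypothetical key field; keeps the newest value of every field per article```
| outputlookup latest_articles.csv

If the article population grows too large for a lookup, the same search can feed a summary index via collect instead.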
Dear All, I am a rookie in Splunk and need your help to extract fields from a log. Example:
2022-07-15 14:30:43 , Oracle WebLogic Server is fully supported on Kubernetes , xsjhjediodjde,"approvalCode":"YES","totalCash":"85000","passenger":"A",dgegrgrg4t3g4t3g4t3g4t,rgrfwefiuascjcusc,
From this log I would like to extract the cash value and display it in tabular form as Date | Passenger | Amount. Please suggest.
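A minimal sketch using rex against the sample line above; it assumes the "totalCash" and "passenger" key/value pairs always appear in that quoted form, and it names the extracted fields Amount and Passenger, reusing the event timestamp for the Date column.

| rex "\"totalCash\":\"(?<Amount>\d+)\""
| rex "\"passenger\":\"(?<Passenger>[^\"]+)\""
| eval Date=strftime(_time, "%Y-%m-%d")
| table Date, Passenger, Amount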
Hi Everyone, I am writing to seek support on configuring the Dell EMC Isilon Add-on for Splunk. We installed the app (Dell EMC Isilon Add-on for Splunk Enterprise) in our dev environment on one of our indexers. The Isilon version used is Isilon OneFS 9.2.1.7. Splunk version: 8.0.4. 2. Is this add-on compatible with Isilon version 9.2.1.7? As per the splunkbase documentation for this add-on:   Are the below commands mandatory from the Isilon side (as per the splunkbase documentation)?   Enabling audit on any of the Isilon storage may increase resource utilization, leading to performance degradation - so we are a bit skeptical about it. 3. On the set-up page of the add-on in Splunk, while entering the Isilon cluster IP, username and password and clicking Save, we get the error "Error occurred while authenticating to Server". On checking emc_isilon.log:   We changed the isilonappsetup.conf file with verify = False to check whether the error is caused by a certificate issue, so that certificates are not considered - this was just a quick test to rule out certificates. Can someone help with this? Thanks in advance
index=a host="b" source="0*_R_S_C_ajf" OWNER=dw*
| eval ODate=strptime(ODATE,"%Y%m%d")
| eval ODATE=strftime(ODate,"%Y-%m-%d")
| eval TWIN_ID=substr(JOBNAME,7,2)
| search ODATE="2022-07-13" TWIN_ID="CH"
| xyseries TWIN_ID STATUS APPLIC
| fillnull value="0"

When I select TWIN_ID="CH" it shows 3 counts, but the actual count is 73. I think xyseries is removing duplicates - can you please help me with this? My output is:

TWIN_ID | N | VALUE | Y
CH | DW_tz | DW_l6 | DW_1b
cH | 0 | 0 | rs_rc
ch | 0 | DW_dwscd | DW_dwscd

I also tried an alternative with chart over:

index=a host="b" source="0*_R_S_C_ajf" OWNER=dw*
| eval ODate=strptime(ODATE,"%Y%m%d")
| eval ODATE=strftime(ODate,"%Y-%m-%d")
| eval TWIN_ID=substr(JOBNAME,7,2)
| chart values(APPLIC) as APPLIC over TWIN_ID by STATUS
| mvexpand N
| fillnull value="0"

My output:

Thank you in advance
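If the number you are actually after is how many jobs are in each status (rather than the distinct APPLIC values), a variant of the same base search that counts instead of listing values would be:

index=a host="b" source="0*_R_S_C_ajf" OWNER=dw*
| eval ODate=strptime(ODATE,"%Y%m%d")
| eval ODATE=strftime(ODate,"%Y-%m-%d")
| eval TWIN_ID=substr(JOBNAME,7,2)
| search ODATE="2022-07-13" TWIN_ID="CH"
| chart count over TWIN_ID by STATUS
| fillnull value=0

xyseries and chart values() both collapse each cell to its set of distinct values, which is why only three APPLIC entries show up instead of the 73 underlying events.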
Hi Team, I have created a trial account in Splunk Observability Cloud for checking traces and spans. While trying to send spans from a Java-based application on an EC2 instance (personal account) using the Java agent, I am getting a 401 unauthorized error. I have verified creating API tokens as per the documentation but still get the same error. Can you please suggest a way forward for this issue?
Links referred:
https://docs.splunk.com/Observability/gdi/get-data-in/application/java/instrumentation/instrument-java-application.html#configure-java-instrumentation (referred to the topic "Send data directly to Observability Cloud" in the above link)
https://docs.splunk.com/Observability/gdi/get-data-in/application/java/troubleshooting/common-java-troubleshooting.html#common-java-troubleshooting
https://docs.splunk.com/Observability/admin/authentication-tokens/api-access-tokens.html#admin-api-access-tokens
Error:
[otel.javaagent 2022-07-15 01:57:56:036 +0000] [BatchSpanProcessor_WorkerThread-1] WARN io.opentelemetry.exporter.jaeger.thrift.JaegerThriftSpanExporter - Failed to export spans
io.jaegertracing.internal.exceptions.SenderException: Could not send 72 spans, response 401:
at io.jaegertracing.thrift.internal.senders.HttpSender.send(HttpSender.java:87)
at io.opentelemetry.exporter.jaeger.thrift.JaegerThriftSpanExporter.lambda$export$2(JaegerThriftSpanExporter.java:99)
at java.util.HashMap.forEach(HashMap.java:1290)
at io.opentelemetry.exporter.jaeger.thrift.JaegerThriftSpanExporter.export(JaegerThriftSpanExporter.java:93)
at io.opentelemetry.sdk.trace.export.BatchSpanProcessor$Worker.exportCurrentBatch(BatchSpanProcessor.java:326)
at io.opentelemetry.sdk.trace.export.BatchSpanProcessor$Worker.run(BatchSpanProcessor.java:244)
at java.lang.Thread.run(Thread.java:748)
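A hedged sketch of the direct-ingest setup with the Splunk OTel Java agent, in case it helps narrow things down; the realm, token and service name below are placeholders. One thing worth double-checking is that the token in use is an ingest/access token rather than an API token, since Observability Cloud treats the two differently and a 401 on span export is consistent with the wrong token type.

# placeholders: substitute your own realm, ingest token and service name
export SPLUNK_REALM=us1
export SPLUNK_ACCESS_TOKEN=<ingest-access-token>
export OTEL_SERVICE_NAME=my-java-service
java -javaagent:./splunk-otel-javaagent.jar -jar myapp.jar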
I have the following data. I need a column chart that has two trellis panels split by Key; in each panel, two columns by Measure_Type; and within each column, stacking by Part. I am new to visualization. Can someone help with this?

Key | Part | Measure_Type | Result
A | 1 | Type1 | 1
A | 1 | Type2 | 2
A | 2 | Type1 | 3
A | 2 | Type2 | 4
B | 1 | Type1 | 5
B | 1 | Type2 | 6
B | 2 | Type1 | 7
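A hedged sketch of one way to shape the data, assuming the table above is what the base search returns; the SPL only aggregates, while the trellis split (by Key), the x-axis (Measure_Type) and the stacking (by Part) are then picked in the column chart's Format and Trellis options rather than in the search itself.

| stats sum(Result) as Result by Key, Measure_Type, Part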
Hi all, I am trying to figure out the best method for determining the volume of logs ingested into my various indexes. From various community postings, I managed to put together the following search, which uses the previous week's results to get an average daily KB ingested for a particular index:

index=_internal component=Metrics group=per_index_thruput earliest=-1w@d latest=-0d@d
| stats sum(kb) as Usage by series
| eval UsageGB=round(Usage/8/1024/1024,4)
| eval daily_avg_usage=round(UsageGB/7,2)

I thought this was giving me a reasonable answer, but then I started comparing it with the values provided under the License Usage report, with results split by index. By comparison, the per_index_thruput figure for a particular index gives a daily average of around 15GB, whereas the License Usage report gives an average for the same index of 56GB. This appears to be the case across all indexes measured. Whilst in this instance I can probably just use the results provided by the License Usage report, I'd like to figure out why the above search returns such a different answer (as this may impact other dashboard searches that I have running). Thanks,
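For comparison, a sketch of the same calculation driven off license_usage.log, which is what the License Usage report itself reads. Two things worth checking in the original search are the /8 divisor (the kb field in metrics.log is already in kilobytes) and the fact that per_index_thruput in metrics.log only samples the busiest series in each interval, so it can significantly under-report.

index=_internal source=*license_usage.log* type=Usage earliest=-1w@d latest=@d
| stats sum(b) as bytes by idx
| eval UsageGB=round(bytes/1024/1024/1024,4)
| eval daily_avg_usage=round(UsageGB/7,2)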
Within the tenable:sc:vuln sourcetype there is a particular field "PluginText" that has a value for hardware serial numbers. Overall I'm looking for any source type that provides that data, but extracting "SerialNumber" as a field from "PluginText" is frustrating. Any advice would be appreciated.
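A hedged starting point for the extraction, assuming the serial appears inside PluginText in a form like "Serial Number: ABC123" (the exact layout varies by plugin output, so the pattern will likely need tuning):

| rex field=PluginText "(?i)serial\s*number\s*[:=]?\s*(?<SerialNumber>[A-Za-z0-9\-]+)"
| table host, SerialNumber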
I am trying to view used memory as a percentage. It doesn't matter if it is in bytes, MB or GB, as long as I can see the percentage.

For now I have this:

index=main collection="Available Memory" host="******" Available bytes instance="0" counter="Available bytes"
| stats latest(Value)
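A hedged sketch of turning that into a percentage. The "Available Memory" collection alone doesn't carry total physical memory, so the total below is hard-coded as an assumption (16 GB); if a total-memory counter is collected on the host, substitute that instead of the literal.

index=main collection="Available Memory" host="******" instance="0" counter="Available bytes"
| stats latest(Value) as available_bytes
| eval total_bytes=17179869184 ```assumed total physical memory in bytes (16 GB); replace with the host's real total```
| eval used_pct=round((total_bytes - available_bytes) / total_bytes * 100, 1)
| table used_pct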