All Topics



I'm bemused with Splunk again (otherwise I wouldn't be posting here ;-)). But seriously - I have an indexer cluster and two separate search head clusters connected to that indexer cluster. One shcluster has ES installed, one doesn't. Everything seems to be working relatively OK. I have a "temporary" index into which I ingest some events, and from those I prepare a lookup by means of a report containing a search ending with | outputlookup. That also works OK. Mostly. It used to work on the "old" shcluster (the one with ES), and it still does. But since we now have a new shcluster (the one without ES), and lookups are of course not shared between different shclusters, I defined the report on the new cluster as well. And here's where the fun starts. The report is defined and works great when run manually, but I cannot schedule it. I open the "Edit Schedule" dialog, fill in all the necessary fields, save the settings... and the dialog closes but nothing happens. If I open the "Edit Schedule" dialog again, the report is still not scheduled. To make things more interesting, I see entries in conf.log, but they only show:

    payload: {
        children: {
            action.email.show_password: { }
            dispatch.earliest_time: { }
            dispatch.latest_time: { }
            schedule_window: {
                value: 15
            }
            search: { }
        }
        value:
    }

So _some_ schedule-related parameters do get written (and yes - if I verify them in etc/users/admin/search/local/savedsearches.conf, they are there):

    dispatch.earliest_time = -24h@h
    dispatch.latest_time = now
    schedule_window = 15

But no dispatch schedule is applied, nor is the schedule enabled at all (the enableSched value is apparently not pushed with the confOp). So I'm stuck. I can of course manually edit savedsearches.conf for my user, but that's not the point. The version is 8.2.6.
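For reference, a fully scheduled report written straight into savedsearches.conf would look something like the sketch below; the stanza name, search, and cron expression are placeholders, but the keys (including the enableSched that never arrives) are the standard ones:

    # etc/users/admin/search/local/savedsearches.conf (names are illustrative)
    [my_lookup_generator]
    search = index=temporary ... | outputlookup my_lookup.csv
    enableSched = 1
    cron_schedule = 30 * * * *
    dispatch.earliest_time = -24h@h
    dispatch.latest_time = now
    schedule_window = 15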
Hello, we have a use case. Using Splunk DB Connect, we ingest data from various systems, especially from the ERP. Every change to an article in the ERP is pushed into a temp DB, which is monitored by Splunk DB Connect. There are millions of data movements each day. But at the end of the day, we just need to work with the latest unique data in the system for each article. Each event has some 10-30 fields. What is the best way to get rid of all the duplicates coming into the system? Delete? How? Skip? A lookup? A summary index? How? What ideas do you have, or what might I be missing?
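For what it's worth, one common search-time pattern is to keep only the newest event per article rather than deleting anything; a sketch (index, sourcetype, and the article_id field are assumptions about your data):

    index=erp_staging sourcetype="erp:article"
    | dedup article_id sortby -_time

A scheduled report ending in | outputlookup (or | collect into a summary index) can then materialize that deduplicated snapshot once a day.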
Dear All, I am a rookie in Splunk and need your help to extract fields from a log. Example:

    2022-07-15 14:30:43 , Oracle WebLogic Server is fully supported on Kubernetes , xsjhjediodjde,"approvalCode":"YES","totalCash":"85000","passenger":"A",dgegrgrg4t3g4t3g4t3g4t,rgrfwefiuascjcusc,

From this log I would like to extract the cash value and display it in tabular form as Date | Passenger | Amount. Please suggest.
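Assuming the totalCash and passenger pairs always appear in the quoted form shown above, a rex-based sketch could look like this (index and sourcetype are placeholders):

    index=your_index sourcetype=your_sourcetype
    | rex "\"totalCash\":\"(?<Amount>\d+)\""
    | rex "\"passenger\":\"(?<Passenger>[^\"]+)\""
    | table _time Passenger Amount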
Hi Everyone, I am writing to seek support on configuring the Dell EMC Isilon Add-on for Splunk.

1. We installed the app (Dell EMC Isilon Add-on for Splunk Enterprise) in our dev environment on one of our indexers. The Isilon version used is Isilon OneFS 9.2.1.7; the Splunk version is 8.0.4.
2. Is this add-on compatible with Isilon version 9.2.1.7, per the Splunkbase documentation for this add-on? Also, are the audit-related commands on the Isilon side mandatory (as per the Splunkbase documentation)? Enabling audit on the Isilon storage may increase resource utilization and degrade performance, so we are a bit skeptical about it.
3. On the add-on's setup page in Splunk, after entering the Isilon cluster IP, username, and password and clicking Save, we get the error "Error occured while authenticating to Server". We checked emc_isilon.log (excerpt not shown), then changed isilonappsetup.conf to verify = False so that certificates would not be checked, to rule out the error being caused by a certificate issue; this was just a quick test to make a point about certificates.

Can someone help with this? Thanks in advance.
index=a host="b" source="0*_R_S_C_ajf" OWNER=dw*
| eval ODate=strptime(ODATE,"%Y%m%d")
| eval ODATE=strftime(ODate,"%Y-%m-%d")
| eval TWIN_ID=substr(JOBNAME,7,2)
| search ODATE="2022-07-13" TWIN_ID="CH"
| xyseries TWIN_ID STATUS APPLIC
| fillnull value="0"

When I select TWIN_ID="CH" it shows 3 counts, but the actual count is 73. I think xyseries is removing duplicates; can you please help me with this? My output is:

    TWIN_ID    N          VALUE       Y
    CH         DW_tz      DW_l6       DW_1b
    cH         0          0           rs_rc
    ch         0          DW_dwscd    DW_dwscd

I also tried an alternative with chart over:

index=a host="b" source="0*_R_S_C_ajf" OWNER=dw*
| eval ODate=strptime(ODATE,"%Y%m%d")
| eval ODATE=strftime(ODate,"%Y-%m-%d")
| eval TWIN_ID=substr(JOBNAME,7,2)
| chart values(APPLIC) as APPLIC over TWIN_ID by STATUS
| mvexpand N
| fillnull value="0"

My output looks similar (screenshot not shown). Thank you in advance.
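If the goal is a count of events per TWIN_ID and STATUS rather than a list of APPLIC values, a sketch along these lines may be closer; the upper() also merges the CH/cH/ch case variants visible in the output above (treat that normalization as an assumption about your data):

    index=a host="b" source="0*_R_S_C_ajf" OWNER=dw*
    | eval TWIN_ID=upper(substr(JOBNAME,7,2))
    | chart count over TWIN_ID by STATUS
    | fillnull value=0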
Hi Team, I have created a trial account in Splunk Observability Cloud for checking traces and spans. While trying to send spans from a Java-based application on an EC2 instance (personal account) using the Java agent, I am getting a 401 Unauthorized error. I have verified creating API tokens as per the documentation but still get the same error. Can you please suggest a way forward for this issue?

Links referred:
https://docs.splunk.com/Observability/gdi/get-data-in/application/java/instrumentation/instrument-java-application.html#configure-java-instrumentation (topic "Send data directly to Observability Cloud" in the above link)
https://docs.splunk.com/Observability/gdi/get-data-in/application/java/troubleshooting/common-java-troubleshooting.html#common-java-troubleshooting
https://docs.splunk.com/Observability/admin/authentication-tokens/api-access-tokens.html#admin-api-access-tokens

Error:
[otel.javaagent 2022-07-15 01:57:56:036 +0000] [BatchSpanProcessor_WorkerThread-1] WARN io.opentelemetry.exporter.jaeger.thrift.JaegerThriftSpanExporter - Failed to export spans
io.jaegertracing.internal.exceptions.SenderException: Could not send 72 spans, response 401:
    at io.jaegertracing.thrift.internal.senders.HttpSender.send(HttpSender.java:87)
    at io.opentelemetry.exporter.jaeger.thrift.JaegerThriftSpanExporter.lambda$export$2(JaegerThriftSpanExporter.java:99)
    at java.util.HashMap.forEach(HashMap.java:1290)
    at io.opentelemetry.exporter.jaeger.thrift.JaegerThriftSpanExporter.export(JaegerThriftSpanExporter.java:93)
    at io.opentelemetry.sdk.trace.export.BatchSpanProcessor$Worker.exportCurrentBatch(BatchSpanProcessor.java:326)
    at io.opentelemetry.sdk.trace.export.BatchSpanProcessor$Worker.run(BatchSpanProcessor.java:244)
    at java.lang.Thread.run(Thread.java:748)
I have the following data. I need a column chart with two trellis panels split by Key; within each trellis panel, two columns by Measure_Type; and within each column, stacking by Part. I am new to visualization. Can someone help with this?

    Key    Part    Measure_Type    Result
    A      1       Type1           1
    A      1       Type2           2
    A      2       Type1           3
    A      2       Type2           4
    B      1       Type1           5
    B      1       Type2           6
    B      2       Type1           7
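One possible shape for this, as a sketch: aggregate while keeping all three grouping fields, then in the visualization's Format menu choose a stacked column chart with Trellis layout split by Key (field names are taken from the table above; the exact Format options depend on your Splunk version):

    index=your_index sourcetype=your_sourcetype
    | stats sum(Result) as Result by Key Measure_Type Part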
Hi all, I am trying to figure out the best method for determining the volume of logs ingested into my various indexes. From various community postings, I managed to put together the following search, which uses the previous week's results to get an average daily KB ingested for a particular index:

    index=_internal component=Metrics group=per_index_thruput earliest=-1w@d latest=-0d@d
    | stats sum(kb) as Usage by series
    | eval UsageGB=round(Usage/8/1024/1024,4)
    | eval daily_avg_usage=round(UsageGB/7,2)

I thought this was giving me a reasonable answer, but then I started comparing it with the values provided under the License Usage report, with results split by index. By comparison, per_index_thruput for a particular index was giving a daily average of around 15GB, whereas the License Usage report gives an average for the same index of 56GB. This appears to be the case across all indexes measured. While in this instance I can probably just use the results from the License Usage report, I'd like to figure out why the above search returns such a different answer (as this may impact other dashboard searches I have running). Thanks,
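Two things worth noting here. First, kb/8/1024/1024 divides by an extra factor of 8; converting KB to GB is just /1024/1024. Second, the License Usage report is driven by license_usage.log rather than the sampled metrics.log throughput groups (which by default only report the top series per interval). A sketch of the equivalent search against the license data:

    index=_internal source=*license_usage.log* type=Usage earliest=-1w@d latest=@d
    | stats sum(b) as bytes by idx
    | eval UsageGB=round(bytes/1024/1024/1024,4)
    | eval daily_avg_usage=round(UsageGB/7,2)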
Within the tenable:sc:vuln sourcetype there is a particular field, "PluginText", whose value contains hardware serial numbers. Overall I'm looking for any sourcetype that provides that data, but extracting "SerialNumber" as a field from "PluginText" is proving frustrating. Any advice would be appreciated.
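Without knowing the exact layout of PluginText, a hedged starting point might be a rex like the following; adjust the pattern to whatever actually surrounds the serial in your events:

    | rex field=PluginText "(?i)serial\s*number\s*[:=]?\s*(?<SerialNumber>[A-Za-z0-9\-]+)"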
I am trying to view used memory as a percentage. It doesn't matter if it is in bytes, MB, or GB, as long as I can see the percentage. For now I have this:

    index=main collection="Available Memory" host="******" Available bytes instance="0" counter="Available bytes"
    | stats latest(Value)
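The "Available bytes" counter alone isn't enough for a percentage; you also need the host's total memory from somewhere. A sketch assuming you simply hard-code the machine's total RAM (the 16 GB constant below is a placeholder, not something from your data):

    index=main collection="Available Memory" host="******" instance="0" counter="Available bytes"
    | stats latest(Value) as avail_bytes
    | eval total_bytes=16*1024*1024*1024
    | eval used_pct=round((total_bytes-avail_bytes)/total_bytes*100,1)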
I have file1.csv and file2.csv with a common field, "Tests". I want to compare file2.csv's "Tests" field against file1.csv's "Tests" field and generate a percentage. For example, if file1.csv has 4 tests and file2.csv has 2 of them, that gives a percentage of 50%.
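A sketch of one way to do this, assuming both files are uploaded as lookup table files: look up each row of file1 in file2 and count the matches.

    | inputlookup file1.csv
    | lookup file2.csv Tests OUTPUT Tests as in_file2
    | stats count as total, count(in_file2) as matched
    | eval pct=round(matched/total*100,1)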
Hello, has anyone faced the issue below when a source (using Logstash) tries to ingest logs to a Splunk HF via HEC?

    :message=>"PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"

I'm stuck on this issue; please help me out. Thanks
Hello, we are seeing the "splunk cloud is under maintenance" banner message when trying to access Splunk dashboards built in Dashboard Studio. Has anyone experienced the same issue? Thanks
Hello, I've got a data input where zipped evtx files are placed for ingestion on a server with the UF installed on it. The local inputs.conf file is modified to point to the folder with the files. The zipped files are unzipped to the Program Files\var\run\splunk\upload folder, where they stay until ingested. The problem starts when I interrupt the Splunk instance, either by restarting the server or restarting the instance manually from the command prompt (i.e., splunk restart). After restarting the service, the files don't seem to resume ingestion. How do I make sure that the UF resumes where it left off? Thanks.
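For reference, the usual shape of such a drop-folder input is a plain monitor stanza like the sketch below (the path is a placeholder). The UF records per-file read offsets in its fishbucket under $SPLUNK_HOME/var/lib/splunk/fishbucket, so monitoring should normally pick up where it left off after a restart, provided the unzipped files are still present at the monitored path:

    # inputs.conf on the UF (path is illustrative)
    [monitor://D:\evtx_drop]
    disabled = 0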
I have been trying to extract a field to list domain admins from AD logs. The logs have all the admins starting with CN=, as shown in the expression. Despite working on regex101, the expression won't extract in Splunk. I've tried making little modifications, but to no avail. Please help.

Expression:

    source="ActiveDirectory" AND "CN=Domain Admins" AND member=* | rex field=_raw"(?<=CN=)[\w .]*(?=,)(?<admin>)/g"

The logs look similar to this:

    CN=Admin Account,OU=Vendor Accounts,OU=IT,DC=domain,DC=domain
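For what it's worth, Splunk's rex needs the named capture group wrapped around the text you want, and it has no /g flag (use max_match for repeated matches). A hedged sketch against the sample line above:

    source="ActiveDirectory" "CN=Domain Admins" member=*
    | rex field=_raw "CN=(?<admin>[^,]+)" max_match=0

Note that this will also capture the group name from any "CN=Domain Admins" DN in the event, so you may need to filter those values out afterwards.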
Hi, is it possible to monitor F5 load balancer SSL certificates using Splunk? Thanks.
What are the big differences in usability between Splunk Cloud and Splunk Enterprise? We are a finance company with around 75 people. We currently use SolarWinds as our SEM. We looked into Splunk because our goal is to centralize logs and transition to Splunk as our SEM. We want our firewall, update manager, anti-malware, etc. to all have logs in a centralized place. Will Splunk Enterprise/Cloud be able to centralize logs? If so, which of Splunk Cloud or Splunk Enterprise would be better for the use case (SEM) I am after? Thanks!
I have some doubts regarding the migration procedure of the Splunk Cloud Platform to Victoria.

1. During migration, intermittent inaccessibility to search heads can occur. Does this mean our Mean Time To Detect (MTTD) may degrade?
2. How long is the maintenance window expected to be? This will help us determine the impact of the expected degradation over the course of the maintenance window.
3. Do we expect any impact on the client's apps?
4. Is there any pre-testing available for this upgrade?
5. Is there any sort of back-out plan?

Please help me with these questions; I'm trying hard to get answers from the docs but not finding any.
I have historical data in Splunk where the same host may appear as either Hostname.Domain.Com or Hostname. I would like all searches that specify Hostname to also gather events for Hostname.Domain.Com, without modifying any searches. I can't delete and reindex, so that's right out. I found this post, which seems to be more or less what I want to do, but it isn't working, and I'm not sure why. It's older, so maybe the settings need to be different. What is the easiest way to accomplish this goal? Cheers.
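For reference, the common index-time normalization looks like the sketch below; note it is an assumption that this matches the approach in the linked post, and it only affects newly indexed events, so it won't merge the historical split on its own:

    # transforms.conf
    [strip_host_domain]
    SOURCE_KEY = MetaData:Host
    REGEX = ^host::([^\.]+)\.
    DEST_KEY = MetaData:Host
    FORMAT = host::$1

    # props.conf
    [host::*.*]
    TRANSFORMS-normhost = strip_host_domain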
Hey all, I need some advice regarding our syslog storage facility. We're using rsyslog, and at the moment we've got all firewall logs going into a single log file, which is getting pretty large at this point. I'm then using the universal forwarder to send this over to Splunk. The log file at the moment is around 150GB and growing. We've got plenty of space, but I was wondering: is there a better way I should be approaching this? For example, should I break the logs up so that each firewall has its own directory and new subdirectories per day? Any insight would be appreciated. Thanks, Will
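If you do split the logs per firewall (and rotate them daily), the UF side stays simple; a sketch assuming a layout like /var/log/firewalls/<host>/<date>.log:

    # inputs.conf (path and sourcetype are illustrative)
    [monitor:///var/log/firewalls]
    recursive = true
    host_segment = 4
    sourcetype = syslog

host_segment = 4 tells the UF to take the fourth path segment (the per-firewall directory name) as the event's host, and per-day files mean each file eventually goes idle and can be aged out instead of growing forever.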