All Topics

index=a host="b" source="0*_R_S_C_ajf" OWNER=dw* |eval ODate=strptime(ODATE,"%Y%m%d") |eval ODATE=strftime(ODate,"%Y-%m-%d") | eval TWIN_ID=substr(JOBNAME,7,2) |search ODATE="2022-07-13" TWIN_ID=... See more...
index=a host="b" source="0*_R_S_C_ajf" OWNER=dw* |eval ODate=strptime(ODATE,"%Y%m%d") |eval ODATE=strftime(ODate,"%Y-%m-%d") | eval TWIN_ID=substr(JOBNAME,7,2) |search ODATE="2022-07-13" TWIN_ID="CH" | xyseries TWIN_ID STATUS APPLIC |fillnull value="0" when i select TWIN_ID="CH" it is showing 3 counts but actuall count is 73.I think xyseries is removing duplicates can you please me on this my output is   TWIN_ID N VALUE Y CH DW_tz DW_l6 DW_1b cH 0 0 rs_rc ch 0 DW_dwscd DW_dwscd i also tried alternate with chart over  index=a host="b" source="0*_R_S_C_ajf" OWNER=dw* |eval ODate=strptime(ODATE,"%Y%m%d") |eval ODATE=strftime(ODate,"%Y-%m-%d") | eval TWIN_ID=substr(JOBNAME,7,2) | chart values(APPLIC) as APPLIC over TWIN_ID by STATUS |mvexpand N |fillnull value="0" MYOUTPUT Thank you in advance
Hi Team, I have created a trial account in Splunk Observability Cloud for checking traces and spans. While trying to send spans from a Java-based application on an EC2 instance (personal account) using the Java agent, I am getting a 401 Unauthorized error. I have verified creating API tokens as per the documentation but still get the same error. Can you please suggest a way forward for this issue?

Links referred:
https://docs.splunk.com/Observability/gdi/get-data-in/application/java/instrumentation/instrument-java-application.html#configure-java-instrumentation (referred to the topic "Send data directly to Observability Cloud" in the above link)
https://docs.splunk.com/Observability/gdi/get-data-in/application/java/troubleshooting/common-java-troubleshooting.html#common-java-troubleshooting
https://docs.splunk.com/Observability/admin/authentication-tokens/api-access-tokens.html#admin-api-access-tokens

Error:
[otel.javaagent 2022-07-15 01:57:56:036 +0000] [BatchSpanProcessor_WorkerThread-1] WARN io.opentelemetry.exporter.jaeger.thrift.JaegerThriftSpanExporter - Failed to export spans
io.jaegertracing.internal.exceptions.SenderException: Could not send 72 spans, response 401:
at io.jaegertracing.thrift.internal.senders.HttpSender.send(HttpSender.java:87)
at io.opentelemetry.exporter.jaeger.thrift.JaegerThriftSpanExporter.lambda$export$2(JaegerThriftSpanExporter.java:99)
at java.util.HashMap.forEach(HashMap.java:1290)
at io.opentelemetry.exporter.jaeger.thrift.JaegerThriftSpanExporter.export(JaegerThriftSpanExporter.java:93)
at io.opentelemetry.sdk.trace.export.BatchSpanProcessor$Worker.exportCurrentBatch(BatchSpanProcessor.java:326)
at io.opentelemetry.sdk.trace.export.BatchSpanProcessor$Worker.run(BatchSpanProcessor.java:244)
at java.lang.Thread.run(Thread.java:748)
I have the following data. I need a column chart that has two trellis panels split by Key; within each trellis panel, two columns split by Measure_Type; and within each column, segments stacked by the different Parts. I am new to visualization. Can someone help with this?

Key  Part  Measure_Type  Result
A    1     Type1         1
A    1     Type2         2
A    2     Type1         3
A    2     Type2         4
B    1     Type1         5
B    1     Type2         6
B    2     Type1         7
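Not a definitive recipe, but as a starting sketch (assuming the rows above are already available as fields named Key, Part, Measure_Type and Result, for example from a lookup), aggregate so that every Key/Measure_Type/Part combination has one Result value and keep Key in the output, since the trellis split field has to be present in the results:

| inputlookup my_results.csv
| stats sum(Result) as Result by Key Measure_Type Part

Then choose a column chart, enable Trellis in the Format menu and split by Key, and set the stack mode to stacked. Whether the remaining Measure_Type/Part breakdown renders exactly as columns-by-type stacked-by-part may need experimentation, so treat this as a starting point; the lookup name my_results.csv is only a placeholder.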
Hi all, I am trying to figure out the best method for determining the volume of logs ingested into my various indexes. From various community postings, I managed to put together the following search, which uses the previous week's results to get an average daily volume ingested for a particular index:

index=_internal component=Metrics group=per_index_thruput earliest=-1w@d latest=-0d@d
| stats sum(kb) as Usage by series
| eval UsageGB=round(Usage/8/1024/1024,4)
| eval daily_avg_usage=round(UsageGB/7,2)

I thought that this was giving me a reasonable answer, but then I started comparing it with the values provided under the License Usage report, with results split by index. By comparison, "per_index_thruput" for a particular index was giving a daily average of around 15 GB, whereas the License Usage report gives an average for the same index of 56 GB. This appears to be the case across all indexes measured. While in this instance I can probably just use the results provided by the License Usage report, I'd like to figure out why the above search returns such a different answer (as this may affect other dashboard searches that I have running). Thanks,
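For comparison, here is a sketch of a search against the license usage log itself, which is the same data the License Usage report reads. The fields b (bytes) and idx are standard in license_usage.log; note also that the kb field in the per_index_thruput metrics events is kilobytes, not kilobits, so whether the /8 in the search above is intended is worth double-checking:

index=_internal source=*license_usage.log type="Usage" earliest=-1w@d latest=-0d@d
| eval UsageGB=b/1024/1024/1024
| stats sum(UsageGB) as UsageGB by idx
| eval daily_avg_usage=round(UsageGB/7,2)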
Within the tenable:sc:vuln sourcetype there is a particular field, "PluginText", whose value contains hardware serial numbers. Overall I'm looking for any sourcetype that provides that data, but extracting "SerialNumber" as a field from "PluginText" has been frustrating. Any advice would be appreciated.
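As a sketch only, and assuming the plugin output contains something along the lines of "Serial Number : ABC1234" (the exact layout of PluginText varies by plugin, so the pattern below is an assumption to adapt), a search-time extraction could look like:

sourcetype=tenable:sc:vuln
| rex field=PluginText "(?i)serial\s*number\s*[:=]?\s*(?<SerialNumber>\S+)"
| where isnotnull(SerialNumber)
| table PluginText SerialNumber

Once the pattern matches reliably, the same regex can be moved into a search-time field extraction (props.conf EXTRACT) so SerialNumber is available in every search.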
I am trying to view used memory as a percentage. It doesn't matter whether the underlying values are in bytes, MB or GB, as long as I can see the percentage.

For now I have this:

index=main collection="Available Memory" host="******" Available bytes instance="0" counter="Available bytes" | stats latest(Value) |
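A sketch of one way to get there, assuming you also have (or can hard-code) the host's total physical memory; the Perfmon "Available bytes" counter only reports free memory, so the total below is a placeholder value in bytes that would need to come from somewhere else (another counter, an inventory lookup, or a constant for the host):

index=main collection="Available Memory" host="******" instance="0" counter="Available bytes"
| stats latest(Value) as available_bytes
| eval total_bytes=17179869184
| eval used_pct=round(100*(total_bytes-available_bytes)/total_bytes,2)
| table available_bytes total_bytes used_pct

Here 17179869184 is just 16 GB expressed in bytes for illustration.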
I have file1.csv and file2.csv with a common field of "Tests". I want to compare the file2.csv field "Tests" against the file1.csv field "Tests" and generate a percentage. For example, file1.csv has 4 tests vs. file2.csv with 2, generating a percentage of 50%.
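A sketch, assuming both files exist as lookup files with one test name per row in the Tests field (the exact lookup names and the intended direction of the comparison are assumptions to adjust):

| inputlookup file1.csv
| eval in_file1=1
| append [| inputlookup file2.csv | eval in_file2=1]
| stats max(in_file1) as in_file1 max(in_file2) as in_file2 by Tests
| stats sum(in_file1) as total_file1 count(eval(in_file1=1 AND in_file2=1)) as matched
| eval pct=round(100*matched/total_file1,2)

With 4 tests in file1.csv and 2 of them also present in file2.csv, pct comes out as 50.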
Hello, has anyone ever faced the issue below when a source (using Logstash) is trying to ingest logs to a Splunk HF via HEC?

:message=>"PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"

I am stuck with this issue, please help me out.

Thanks
Hello, we are seeing the "Splunk Cloud is under maintenance" banner message when we try to access Splunk dashboards developed in Dashboard Studio. Has anyone ever experienced the same issue?

Thanks
Hello, I've got a data input where zipped evtx files are placed for ingestion on a server with the UF installed on it. The local inputs.conf file is modified to point to the folder with the files. The zipped files are unzipped to the Program Files\var\run\splunk\upload folder, where they stay until ingested.

The problem starts when I interrupt the Splunk instance, either by restarting the server or by restarting the instance manually from the cmd prompt (i.e. splunk restart). After restarting the service, the files don't seem to resume ingestion.

How do I make sure that the UF resumes where it left off? Thanks.
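For reference, a sketch of the kind of monitor stanza being described (the drop-folder path and index are placeholders, not the poster's actual settings):

[monitor://C:\evtx_drop]
disabled = false
index = wineventlog
whitelist = \.zip$

Note that archive files are decompressed and read as a whole rather than tailed like plain log files, which may be relevant to the resume-after-restart behaviour being asked about.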
I have been trying to extract a field to list domain admins from AD logs. The logs have all the admins starting with CN=, as shown in the expression. Despite working on regex101, the expression won't extract in Splunk. I've tried making little modifications but to no avail. Please help.

Expression:

source="ActiveDirectory" AND "CN=Domain Admins" AND member=* | rex field=_raw"(?<=CN=)[\w .]*(?=,)(?<admin>)/g"

The logs look similar to this:

CN=Admin Account,OU=Vendor Accounts,OU=IT,DC=domain,DC=domain
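A sketch of a working variant, assuming the admin names live in the member field: in SPL the whole regex goes in one quoted string after field=<name>, the named capture group has to wrap the text you want to keep, and there is no /g flag (use max_match for repeated matches):

source="ActiveDirectory" "CN=Domain Admins" member=*
| rex field=member max_match=0 "CN=(?<admin>[^,]+)"
| stats values(admin) as domain_admins

With the sample value above, admin would be extracted as "Admin Account".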
Hi, is it possible to monitor F5 load balancer SSL certificates using Splunk?   Thanks.
What are the big differences in usability between Splunk Cloud and Splunk Enterprise? We are a finance company with around 75 people. We currently use SolarWinds as our SEM. We looked into Splunk because our goal is to centralize logs and transition to Splunk as our SEM. We want our firewall, update manager, anti-malware, etc. to all have logs in a centralized place. Will Splunk Enterprise/Cloud be able to centralize logs? If so, which of Splunk Cloud or Splunk Enterprise would be better for the use case (SEM) I am after? Thanks!
I have some doubts regarding the procedure for migrating the Splunk Cloud Platform to Victoria.

1. During migration, intermittent inaccessibility to search heads can occur. Does this mean we may have a reduction in our Mean Time To Detect (MTTD)?
2. How long is the maintenance window expected to be? This will help us determine the impact of the expected degradation over the course of the maintenance window.
3. Do we expect any impacts on the apps for the client?
4. Is there any pre-testing available for this upgrade?
5. Is there any sort of back-out plan?

Please help me with these questions; I'm trying hard to get these answers from the docs but not finding any solutions.
I have historical data in Splunk where the same host may appear as either Hostname.Domain.Com or Hostname. I would like all searches that specify Hostname to also gather events for Hostname.Domain.Com without modifying any searches. I can't delete and reindex, so that's right out. I found this post, which seems to be more or less what I want to do, but it isn't working, and I'm not sure why. It's older, so maybe the settings need to be different. What is the easiest way to accomplish this goal? Cheers.
Hey all, I need some advice regarding our syslog storage facility. We're using rsyslog, and at the moment we've got all firewall logs going into a single log file, which is getting pretty large at this point. I'm then using the universal forwarder to send this over to Splunk. The log file at the moment is around 150 GB and growing. We've got plenty of space, but I was wondering: is there a better way I should be approaching this? For example, should I break the logs up so that each firewall has its own directory and new subdirectories per day?

Any insight would be appreciated.

Thanks,
Will
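If you do split the logs into per-firewall directories, for example /var/log/firewalls/<firewall-name>/<date>.log (an assumed layout, not the existing one), the universal forwarder can monitor the whole tree and take the host name from the path. A sketch of the inputs.conf side:

[monitor:///var/log/firewalls/*/*.log]
disabled = false
index = network
sourcetype = syslog
host_segment = 4

Here host_segment = 4 picks the fourth path segment (the firewall name) as the host field, and the index and sourcetype values are placeholders. Splitting the files also lets log rotation keep individual files small instead of one ever-growing 150 GB file.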
index="*dockerlogs*" source="*gps-request-processor-dev*" OR source="*gps-external-processor-dev*" OR source="*gps-artifact-processor-dev*" | eval LabelType=coalesce(labelType, documentType) | stat... See more...
index="*dockerlogs*" source="*gps-request-processor-dev*" OR source="*gps-external-processor-dev*" OR source="*gps-artifact-processor-dev*" | eval LabelType=coalesce(labelType, documentType) | stats count(eval(status="Received" AND source like "%gps-request-processor%" )) as received count(eval(status="Failed")) as failed by LabelType LabelType               Received            Failed ----------                      --------                 ------ CARRIERLABEL       2                          2 NIKE                            39                        35 TASKSTART             1                           0 i want to transform above result into below table 1) where category can be 'external' or 'internal'       if labeltype is 'CARRIERLABEL' then category is 'external' else for other labeltype it should be 'internal' 2)  successcount = Received - failed category               successcount --------                    ------------- external                0 internal                 5
How do I change my WinEventLog output to look like this...

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
- <System>
  <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" />
  <EventID>4625</EventID>
  <Version>0</Version>
  <Level>0</Level>
  <Task>12544</Task>
  <Opcode>0</Opcode>
  <Keywords>0x8010000000000000</Keywords>
  <TimeCreated SystemTime="2016-07-29T11:54:00.714207700Z" />
  <EventRecordID>67620</EventRecordID>
  <Correlation />
  <Execution ProcessID="552" ThreadID="4700" />
  <Channel>Security</Channel>
  <Computer>***</Computer>
  <Security />
  </System>

instead of this...

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">- <System><Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" /><EventID>4625</EventID><Version>0</Version><Level>0</Level><Task>12544</Task><Opcode>0</Opcode> <Keywords>0x8010000000000000</Keywords><TimeCreated SystemTime="2016-07-29T11:54:00.714207700Z" /><EventRecordID>67620</EventRecordID><Correlation /><Execution ProcessID="552" ThreadID="4700" />  <Channel>Security</Channel> <Computer>***</Computer><Security /> </System>
I need to first issue an alert 24 hours in advance for locations whose temperature forecast is above 100 F (the long-term query). Then I need to query the more recent forecast for the next 2 to 8 hours (the near-term forecast) for the same set of locations. If the recent forecast for a location has dropped below the 100 F threshold, I need to issue an alert cancelling the previous alert. If a location's recent forecast is above 100 F but the prior forecast was below 100 F (so no alert had been issued), I need to issue a new alert for that location. Effectively, the near-term query needs to access the results of the long-term query (or re-run it) to compare against the recent forecast. (I'm especially unclear on how to compare the results of two queries in Splunk.) How can I implement a solution for this with Splunk? Thanks for any pointers!

Let's build an example to develop the solution. Assume the operation time in question is 8:00 AM on July 14, 2022, so the 24-hours-in-advance long-term forecast would have been made at 8:00 AM on July 13, 2022. The window for the short-term forecast is 0:00 AM (8-8) to 6:00 AM (8-2) on the same day, i.e. 8 to 2 hours before.

Here are more concise requirements:

1. Hourly, the 24-hours-ahead forecasts for all locations shall be collected and evaluated. If the 24-hours-ahead temperature will be over the threshold (100 F), an alert shall be sent for the to-be-overheated locations.
2. Also hourly, the forecasts for the window from 2 hours ahead to 8 hours ahead shall be collected and evaluated, and revisions made according to the following rules:
   a. If a location's 2-to-8-hours-ahead forecast is below the threshold but an alert had been issued, a cancellation message shall be sent.
   b. If a location's 2-to-8-hours-ahead forecast is above the threshold but no alert had been sent, a new alert shall be sent.
   c. Otherwise, no action is needed.
3. At 15-minute intervals, the real-time temperature for the locations shall be collected and evaluated, and revisions made according to the following rules:
   a. If a location's real-time temperature is below the threshold but an alert had been issued, a cancellation message shall be sent.
   b. If a location's real-time temperature is above the threshold but no alert had been sent, a new alert shall be sent.
   c. Otherwise, no action is needed.
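One way to let the near-term search see what the long-term search decided is to have the long-term alert write its per-location state to a lookup and have the later searches read it back. A rough sketch with entirely assumed index, field, and lookup names (index=weather, location, forecast_temp, horizon_hours, alert_state.csv); the same pattern extends to the 15-minute real-time check.

Hourly long-term (24-hours-ahead) alert, which also records state:

index=weather sourcetype=forecast horizon_hours=24
| stats latest(forecast_temp) as forecast_temp by location
| eval alerted=if(forecast_temp>100,1,0)
| outputlookup alert_state.csv
| where alerted=1

Hourly near-term (2-to-8-hours-ahead) revision, comparing against the recorded state:

index=weather sourcetype=forecast horizon_hours>=2 horizon_hours<=8
| stats latest(forecast_temp) as near_temp by location
| lookup alert_state.csv location OUTPUT alerted
| fillnull value=0 alerted
| eval action=case(near_temp<=100 AND alerted=1, "cancel", near_temp>100 AND alerted=0, "new_alert", true(), "none")
| where action!="none"

The first search keeps every location in the lookup (outputlookup writes all rows before the final where filters the alert results), so the second search can distinguish "was alerted" from "was not".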
I have dashboard panels that are configured to use the input token global_time, and a time input dropdown, so that when I change the time, all of my panels update automatically. Now, through the drilldown settings, I am adding a "Link to custom URL" to each panel that opens a Splunk search page. The problem is that I want to be able to change the range to, say, 4 hours instead of 24 from the dashboard, and when I click the bars on the chart, get a Splunk search page covering those 4 hours; likewise, if I change the global time to 7 days, I want a search over 7 days. I figured I need to change the link, which currently contains <&earliest=-4%40h&latest=now>, in the "Link to custom URL" drilldown settings. How can I relate global_time to that time range?
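Not a confirmed fix, but in Simple XML a time input whose token is global_time exposes $global_time.earliest$ and $global_time.latest$, and those tokens can be embedded in the drilldown URL instead of the hard-coded -4h/now values. A sketch (the query string here is only a placeholder):

<drilldown>
  <link target="_blank">/app/search/search?q=search%20index%3Dmy_index&amp;earliest=$global_time.earliest$&amp;latest=$global_time.latest$</link>
</drilldown>

The same $global_time.earliest$/$global_time.latest$ substitution also works in the URL box of the drilldown editor UI, so the opened search page follows whatever range the dashboard's time picker is set to.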