All Topics


Hi, I have a weird problem: some data disappears from an index after a few days, but not from a summary index that is based on it. Let me explain. I have an index whose data I use to produce results with a query over a 24-hour time range. I created a summary index, fed by a saved search that is almost the same query, to show similar information on another dashboard (Historic). If you run both queries (the original and the saved search that feeds the summary index) you get the same number of events, but when I search the summary index, on some dates the results do not match the original index. Let's say I get 50 events from the original index and 60 from the summary index. How can that be?? I've been told the transaction command can sometimes cause trouble. Is this true? I use this command in the original query and in the saved search that feeds the summary index, but not in the search that displays information from the summary index. Thanks. Regards,
I have an app with my alerts. I have risk enabled and it's working; however, risk isn't showing up in the Edit Correlation Search menu. Is there a setting in a .conf file I am missing? I looked into alert_actions.conf but don't see any other rule linking to it there. Below is the risk setting for one of my rules:

action.risk = 1
action.risk.param._risk = [{"risk_object_field": "dest", "risk_object_type": "system", "risk_score": 14}]
action.risk.param._risk_message = Wsmprovhost.exe spawned a LOLBAS process on $dest$.
action.risk.param._risk_score = 0
action.risk.param.verbose = 0
Is it possible to access the script/resource files located within a Splunk app's bin or default directory from the Splunk REST API or Java SDK? I can get the path to the app, but I cannot figure out a way to access the files within the app itself. Is this possible in Splunk?
Hi everyone, From dbxquery, I retrieve this table:

id      start_time1              end_time1                     start_time2              end_time2
1234    13/09/2022 21:46:43.0    16/09/2022 12:10:35.414809    15/09/2022 21:46:32.0    16/09/2022 09:27:41.0
1234    13/09/2022 21:46:43.0    16/09/2022 12:10:35.414809    14/09/2022 24:52:03.0    15/09/2022 10:15:56.0
1234    13/09/2022 21:46:43.0    16/09/2022 12:10:35.414809    15/09/2022 10:30:14.0    15/09/2022 10:47:26.0

I want to find the start_time2 that is closest to start_time1, which in this case means the second row. How can I do this, please?

Thanks, Julia
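A minimal sketch of one approach, assuming the fields above come straight out of dbxquery; the timestamp format string is an assumption based on the sample values, so adjust it if strptime returns null:

  ... your dbxquery ...
  | eval st1 = strptime(start_time1, "%d/%m/%Y %H:%M:%S.%Q")
  | eval st2 = strptime(start_time2, "%d/%m/%Y %H:%M:%S.%Q")
  | eval diff_seconds = abs(st2 - st1)
  | sort 0 diff_seconds
  | dedup id
  | table id start_time1 start_time2 diff_seconds

The sort orders rows by the gap between the two times, and dedup id keeps only the first (smallest-gap) row per id.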
I am trying to use an eval with like() to assign priority to certain IPs/hosts, and I am running into an issue where the priority is not being assigned. I am using network data to create my ES asset list, and I have a lookup that maps an IP to a CIDR range and then returns the zone the IP is associated with. Later in my search I rename zone to bunit, and right after that I am testing the eval as follows:

| eval priority=if(like(bunit,"%foo%"), "critical" , "TBD")

While testing, at the end of my search I have:

| table ip, mac, nt_host, dns, owner, priority, lat, long, city, country, bunit, category, pci_domain, is_expected, should_timesync, should_update, requires_av, device, interface
| search bunit=*foo*

I get a list of all foo-related bunit events, but the priority field is set to "TBD".

Would appreciate any help - thx
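One thing worth ruling out (a hedged suggestion, not a confirmed cause): like() does not ignore case, so a bunit value such as "Foo" will not match "%foo%" even though the case-insensitive search bunit=*foo* still finds it. A quick variant to test:

  | eval priority = if(like(lower(bunit), "%foo%"), "critical", "TBD")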
Hi, I'm trying to use a regex match-pattern inside app-agent-config.xml in our Java microservice, but it does not work properly. E.g.:

<sensitive-url-filter delimiter="/" segment="3,4,5,6" match-filter="REGEX" match-pattern=":" param-pattern="myParam|myAnotherParam"/>

This should mask the selected segments that contain ":", but it masks everything. If I use match-pattern="=" it works as expected (masking segments that contain "=" in the string). Other examples that do not work (they mask everything):

match-pattern=":"
match-pattern="\x3A" (3A is ":" in the ASCII table)
match-pattern="[^a-z¦-]+" (should return true if there is anything other than lowercase letters and "-")
match-pattern=":|="

Thank you
Best regards, Alex Oliveira
I'm setting up my IdP service with SAML SSO (single sign-on). The documentation says Splunk Cloud provides JIT (just-in-time) provisioning, but I can't find the JIT provisioning section.

These are the pages I referred to:
https://docs.splunk.com/Documentation/SCS/current/Admin/IntegrateIdP#Just-in-time_provisioning_to_join_users_to_your_tenant_automatically
https://docs.splunk.com/Documentation/SCS/current/Admin/IntegrateAzure

I'm using a free trial now. Could that be the problem? Does JIT provisioning require any other plan? Or am I just not finding where the JIT provisioning option is?

Please answer me. Thank you.
Hi, I would like Splunk to send a report automatically on the last day of each month. In this case, I am afraid I need to use a cron schedule. Does anyone have an idea? Thanks in advance! Tong
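A rough sketch of one common workaround, since cron alone cannot express "last day of the month": schedule the report for the last few possible days and have the search itself bail out unless tomorrow is the 1st. The cron expression and the guard below are an assumption about how you might wire it, not the only way:

  0 23 28-31 * *

  ... your report search ...
  | where strftime(relative_time(now(), "+1d"), "%d") == "01"

If this is set up as an alert with the trigger condition "Number of results is greater than 0" (again an assumption about your setup), the email only goes out on the night the guard passes, i.e. the actual last day of the month.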
Considering 2022-06 as the starting month: if the month is 2022-07, I should assign 2022-06's greater_6_mon value to 2022-07's prev field, and likewise for 2022-08.

Here are my values:

month      prev    greater_6_mon
2022-06            26
2022-07            2
2022-08            1

Expected result (please suggest):

month      prev    greater_6_mon
2022-06    0       26
2022-07    26      2
2022-08    2       1
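A minimal sketch using streamstats, assuming the rows are (or can be) sorted by month and that prev should default to 0 for the first month:

  | sort 0 month
  | streamstats current=f window=1 last(greater_6_mon) as prev
  | fillnull value=0 prev
  | table month prev greater_6_mon

With current=f and window=1, each row's prev is the previous row's greater_6_mon, which matches the expected output above.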
reference:

| bucket _time span=1d
| stats sum(bytes*) as bytes* by user _time src_ip
| eventstats max(_time) as maxtime avg(bytes_out) as avg_bytes_out stdev(bytes_out) as stdev_bytes_out
| eventstats count as num_data_samples avg(eval(if(_time < relative_time(maxtime, "@h"),bytes_out,null))) as per_source_avg_bytes_out stdev(eval(if(_time < relative_time(maxtime, "@h"),bytes_out,null))) as per_source_stdev_bytes_out by src_ip
| where num_data_samples >= 4 AND bytes_out > avg_bytes_out + 3 * stdev_bytes_out AND bytes_out > per_source_avg_bytes_out + 3 * per_source_stdev_bytes_out AND _time >= relative_time(maxtime, "@h")
| eval num_standard_deviations_away_from_org_average = round(abs(bytes_out - avg_bytes_out) / stdev_bytes_out,2), num_standard_deviations_away_from_per_source_average = round(abs(bytes_out - per_source_avg_bytes_out) / per_source_stdev_bytes_out,2)
| fields - maxtime per_source* avg* stdev*
ERROR TcpOutputFd - Read error. Connection reset by peer
09-16-2022 06:13:35.552 +0000 INFO TcpOutputProc - Connection to 111.11.11.111:9997 closed. Read error. Connection reset by peer

I see the above error in the forwarder log and ingestion is not happening. The Splunk version in use is 8.0.2. I modified outputs.conf but still get the same error.
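For reference, a minimal outputs.conf pointing at that receiver might look like the sketch below (the output group name is a placeholder). "Connection reset by peer" means the remote end closed the connection, so it is also worth confirming that the indexer is actually listening on 9997, that any firewall in between allows the traffic, and that SSL settings match on both ends:

  [tcpout]
  defaultGroup = primary_indexers

  [tcpout:primary_indexers]
  server = 111.11.11.111:9997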
What are the various techniques for onboarding data?
It should run automatically after downloading, right? But the login page did not appear. How do I get it to show?
How will we be able to determine which of our 10,000 forwarders is down?
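One hedged approach (a sketch, assuming every forwarder sends its internal logs to the indexers, which is the default behaviour): list the hosts seen in _internal and flag those that have gone quiet for more than an hour:

  | metadata type=hosts index=_internal
  | eval hours_silent = round((now() - recentTime) / 3600, 1)
  | where hours_silent > 1
  | convert ctime(recentTime) as last_seen
  | table host last_seen hours_silent

The Monitoring Console's forwarder dashboards (if forwarder monitoring is enabled there) give a similar picture out of the box.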
Hi, I would like to display the values of variables from an event as a table. My data format is as follows:

Time: 9/16/22 10:10:10.000 AM
Event: index=* sourcetype=* type=* "Name1" : "A", "Name2" : "B", "Name3" : "C", ... "Name10" : "J", "Var1" : 10, "Var2" : 10, "Var3" : 25, ... "Var10" : 50

I would like the search results transformed into a table formatted like this, dropping the original field names Name*/Var* and replacing the column headers with new names as shown below:

Station    Value
A          10
B          10
C          25
...        ...
J          50

How can I do this? Thanks
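A sketch of one way to do this, assuming each Name<n> pairs with the Var<n> that shares its numeric suffix, and that you are working with a single event at a time (the base search is the placeholder one from the question):

  index=* sourcetype=* type=*
  | foreach Var* [ eval pair_<<MATCHSTR>> = 'Name<<MATCHSTR>>' . "," . '<<FIELD>>' ]
  | fields pair_*
  | fields - _time _raw
  | transpose 0
  | rename "row 1" as pair
  | rex field=pair "^(?<Station>[^,]*),(?<Value>.*)$"
  | table Station Value

The foreach builds one "Name,Var" pair per suffix, transpose turns those ten fields into ten rows, and the rex splits each pair back into the Station and Value columns.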
Hello All, On Windows Server, the URL Monitoring Extension v2.2.0 on Machine Agent v21 is crashing intermittently. While it is down, the extension fails to report its metrics to the Controller, although the Machine Agent keeps sending all the infrastructure metrics. I tried increasing the heap (the Xmx and Xms values) and raising the metric registration limit to its maximum, but neither resolved the issue. However, once I restart the Machine Agent service, the URL Monitoring extension starts reporting its metrics again. I have to repeat this restart 5 to 6 times per day. Can someone please help me? Thanks in advance! Avinash
Hi, a fundamentals question, but one of those brain teasers: how do I get a total count of the distinct values of a field? For example, Splunk shows my "aws_account_id" field has 100+ unique values. What is that exact 100+ number? If I hover my mouse over the field, it shows the top 10 values, etc., but not the total count. Things I have tried, as per other posts in the forum:

index=aws sourcetype="aws:cloudtrail"
| fields aws_account_id
| stats dc(count) by aws_account_id

This does show me the total count (which is 156), but not in the layout I want. Instead I want the data in this tabular format:

Fieldname         Count
aws_account_id    156

Thanks in advance
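A minimal sketch that should produce that exact two-column layout (index, sourcetype, and field names taken from the question):

  index=aws sourcetype="aws:cloudtrail"
  | stats dc(aws_account_id) as Count
  | eval Fieldname="aws_account_id"
  | table Fieldname Count

dc() counts the distinct values of the field across all matching events, and the eval simply adds the literal field name as its own column.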
I have a dashboard for all SSL certificates. I'd like to set up a few alerts for renewal reminders from Splunk. My current query is shown below:

Index=epic_ehr source=C:\\logs\certs\\results.json
|Search validdays<60
|table hostname,validddays,issuer,commonName

My custom trigger condition is:

search validdays="*" AND count<273

When I run this I see results, but no alert is triggered nor do I receive any email. Please assist.
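A hedged sketch of a simpler setup, assuming the field is consistently named validdays (note the query above mixes validdays and validddays) and that the goal is to alert whenever any certificate has fewer than 60 days left: keep the threshold in the search itself and use the built-in trigger condition "Number of Results is greater than 0" instead of a custom condition:

  index=epic_ehr source="C:\\logs\\certs\\results.json"
  | search validdays<60
  | table hostname, validdays, issuer, commonName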
Hi folks, I'm trying to list all users from my Splunk Cloud instance using this endpoint:
https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTREF/RESTaccess#authentication.2Fusers:~:text=s%3Adict%3E%0A%20%20%20%3C/content%3E%0A%20%3C/entry%3E-,authentication/users,-https%3A//%3Chost%3E%3A%3CmPort

However, I'm using a custom role that has only the following capabilities:

* admin_all_objects
* rest_access_server_endpoints
* rest_apps_management
* rest_apps_view
* rest_properties_get
* edit_user
* search

The user is unable to pull all users. My assumption is that because this user does not inherit any other role, it is not able to list all users, as per the grantableRoles. If I'm right, what chance do I have for this user to pull all users with the REST API? Or what capabilities am I missing? Thanks in advance,
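For reference, a sketch of the call itself against the documented endpoint; host, port, and credentials are placeholders, and this assumes the Splunk Cloud management port is reachable from where you run it, which typically has to be arranged with Splunk support:

  curl -k -u <username>:<password> "https://<host>:8089/services/authentication/users?output_mode=json&count=0"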
Howdy Splunk Community, I'm curious whether anyone here has experience with, or is currently using, Splunk's "Azure Functions for Splunk", specifically the "event-hubs-hec" solution, to push events from their Azure tenant to their Splunk deployment. If so, I'm curious what designs / architecture patterns you used when deploying and segmenting your Azure Event Hub Namespaces and Event Hubs.

Reading over the README in the repo leads me to believe that you can get away with dumping all of the events generated within your tenant into a single event hub namespace / event hub, assuming you stay within the performance limitations imposed by the event hub. I don't particularly like this model, as I believe it makes troubleshooting ingestion / data issues a pain when all of your data, regardless of source or event type, is in a single centralized location, so I would like a bit more organization than that.

I'm slowly working on a rough draft of how I think I want to break out my Event Hub Namespaces / Event Hubs, but right now I'm not sure whether I'm going to make my life, or my development team's lives, harder, since they will have to interface with this design via Terraform as we continue implementing infrastructure as code on our platform.

My initial breakout looks something like:

- A unique subscription per Azure region we are deployed in, dedicated to logging infrastructure, that will contain the Event Hub Namespaces and corresponding function applications that push events out to Splunk, etc. All infrastructure that exists within a given region will send its diagnostic logging events (platform logs / resource logs) into the logging subscription.
- An EH namespace for SQL Servers, with EHs broken out per event type generated by the SQL Servers
- An EH namespace for Key Vaults, with EHs broken out per event type generated by Key Vaults
- An EH namespace for Storage Accounts, with EHs broken out per event type generated by the storage accounts
- An EH namespace for global Microsoft services (Azure Active Directory, Microsoft Defender, Sentinel, etc.)
- An EH namespace for Azure PaaS / IaaS offerings (Databricks, Azure Data Factory, Cognitive Search, etc.)
- An EH namespace for networking events (NAT Gateways, Firewalls, Public IPs, APIM, Front Door, WAF, etc.)

...and so on and so forth.

Anyone willing to lend their insight?