All Topics

Hi, I'm trying to use a regex match-pattern inside app-agent-config.xml in our Java microservice, but it does not work properly. E.g.:

<sensitive-url-filter delimiter="/" segment="3,4,5,6" match-filter="REGEX" match-pattern=":" param-pattern="myParam|myAnotherParam"/>

This should mask the selected segments that contain ":", but it masks everything. If I use match-pattern="=" it works as expected (masking segments that contain "=" in the string). Other examples that do not work (they mask everything):

match-pattern=":"
match-pattern="\x3A" (\x3A is ":" in the ASCII table)
match-pattern="[^a-z¦-]+" (should return true if there is anything other than lowercase letters and "-")
match-pattern=":|="

Thank you. Best regards, Alex Oliveira
I'm setting up my IdP service with SAML SSO (single sign-on). The documentation says Splunk Cloud provides JIT (Just-In-Time) provisioning, but I can't find the JIT provisioning section.

These are the pages I referred to:
https://docs.splunk.com/Documentation/SCS/current/Admin/IntegrateIdP#Just-in-time_provisioning_to_join_users_to_your_tenant_automatically
https://docs.splunk.com/Documentation/SCS/current/Admin/IntegrateAzure

I'm using the free trial now. Could this be the problem? Does JIT provisioning require another plan? Or am I simply not finding where the JIT provisioning option is?

Please help. Thank you.
Hi, I would like to send a report via Splunk automatically on the last day of each month. In this case, I am afraid that I need to use a cron schedule. Does anyone have an idea? Thanks in advance! Tong
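Cron itself has no "last day of month" token, so a common workaround (a sketch, not from the original post; the stanza name and index are hypothetical) is to schedule the report for days 28-31 and let the search itself only return results when tomorrow is the 1st:

# savedsearches.conf -- hypothetical report name and base search
[monthly_report]
# run at 23:55 on days 28 through 31; only the true last day of the month
# survives the where clause below, because "+1d@d" lands on the 1st
cron_schedule = 55 23 28-31 * *
search = index=my_index | where strftime(relative_time(now(), "+1d@d"), "%d") == "01"

With this pattern the report is attempted up to four times at month end, but only produces (and therefore emails) results on the actual last day.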
Considering 2022-06 as the starting month: if the month is 2022-07, I should assign 2022-06's value of the field "greater_6_mon" to 2022-07's field "prev", and likewise for 2022-08. Here are my values:

month      prev    greater_6_mon
2022-06            26
2022-07            2
2022-08            1

Expected result (please suggest):

month      prev    greater_6_mon
2022-06    0       26
2022-07    26      2
2022-08    2       1
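One possible way to shift the value down by one month (a sketch, not from the original post, assuming the results are already one row per month) is streamstats with a one-row window, which copies the previous row's greater_6_mon into prev and fills the first month with 0:

... | sort 0 month
| streamstats current=f window=1 last(greater_6_mon) as prev
| fillnull value=0 prev
| table month prev greater_6_mon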
reference:

| bucket _time span=1d
| stats sum(bytes*) as bytes* by user _time src_ip
| eventstats max(_time) as maxtime avg(bytes_out) as avg_bytes_out stdev(bytes_out) as stdev_bytes_out
| eventstats count as num_data_samples avg(eval(if(_time < relative_time(maxtime, "@h"),bytes_out,null))) as per_source_avg_bytes_out stdev(eval(if(_time < relative_time(maxtime, "@h"),bytes_out,null))) as per_source_stdev_bytes_out by src_ip
| where num_data_samples >=4 AND bytes_out > avg_bytes_out + 3 * stdev_bytes_out AND bytes_out > per_source_avg_bytes_out + 3 * per_source_stdev_bytes_out AND _time >= relative_time(maxtime, "@h")
| eval num_standard_deviations_away_from_org_average = round(abs(bytes_out - avg_bytes_out) / stdev_bytes_out,2), num_standard_deviations_away_from_per_source_average = round(abs(bytes_out - per_source_avg_bytes_out) / per_source_stdev_bytes_out,2)
| fields - maxtime per_source* avg* stdev*
ERROR TcpOutputFd - Read error. Connection reset by peer
09-16-2022 06:13:35.552 +0000 INFO TcpOutputProc - Connection to 111.11.11.111:9997 closed. Read error. Connection reset by peer

I see the above errors in the forwarder log and ingestion is not happening. The Splunk version in use is 8.0.2. I modified outputs.conf but still get the same error.
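For reference (not from the original post; the output group name here is hypothetical), a minimal forwarder outputs.conf pointing at a single indexer looks roughly like this. "Connection reset by peer" usually means the host named in this setting is reachable but is not accepting the forwarded data, so the receiving side is worth checking as well:

# outputs.conf on the forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 111.11.11.111:9997

On the indexer side, the receiving port (9997 in this example) has to be enabled under Settings > Forwarding and receiving and reachable from the forwarder, and any SSL settings must match on both ends.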
What are the various techniques for onboarding data?
It should run automatically after downloading, right? But the login page did not appear, like this. How do I get it?
How will we be able to determine which of our 10,000 forwarders is down?
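One commonly used approach (a sketch, not from the original post; the 15-minute threshold is an assumption) is to compare the last time each forwarder opened a connection to the indexing tier, using the forwarder connection records in the _internal index:

index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(_time) as last_seen by hostname
| eval minutes_since_last_seen = round((now() - last_seen) / 60, 1)
| where minutes_since_last_seen > 15
| sort - minutes_since_last_seen

Anything that has not phoned home within the chosen window is a candidate for a down forwarder; the Monitoring Console's forwarder monitoring views give a similar picture out of the box.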
Hi, I would like to display the values of variables from an event as a table. My data format is as follows:

Time: 9/16/22 10:10:10.000 AM
Event: index=* sourcetype=* type=* "Name1" : "A", "Name2" : "B", "Name3" : "C", ... "Name10" : "J", "Var1" : 10, "Var2" : 10, "Var3" : 25, ... "Var10" : 50

I would like the search results to be transformed into a table formatted like this, pairing up the field names Name* and Var* and replacing the column headers with new names, as shown below:

Station    Value
A          10
B          10
C          25
...        ...
J          50

How can I do this? Thanks
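One possible approach (a sketch, not from the original post), assuming each NameN field has a matching VarN field with the same numeric suffix, is to pair the fields with foreach and then expand the pairs into rows:

index=* sourcetype=* type=*
| foreach Name* [ eval pairs = mvappend(pairs, '<<FIELD>>' . "=" . 'Var<<MATCHSTR>>') ]
| mvexpand pairs
| eval Station = mvindex(split(pairs, "="), 0), Value = mvindex(split(pairs, "="), 1)
| table Station Value

Here <<FIELD>> expands to the matched field name (Name1, Name2, ...) and <<MATCHSTR>> to the wildcard portion (1, 2, ...), so each pair becomes a "station=value" string that is split back apart after mvexpand.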
Hello All, on Windows Server, the URL Monitoring Extension v2.2.0 on Machine Agent v21 is crashing intermittently. The extension fails to report its metrics to the Controller during the crash, but the Machine Agent keeps sending all the infra metrics to the Controller. I tried increasing the heap (the Xmx and Xms values) and raising the metric registration limit to the maximum, but these options did not resolve the issue. However, once I restart the Machine Agent service, the URL Monitoring extension starts reporting its metrics again. I have to repeat this 5 to 6 times per day. Can someone please help me? Thanks in advance! Avinash
Hi, fundamentals question, but one of those brain teasers: how do I get a total count of the distinct values of a field? For example, as shown below, Splunk shows my "aws_account_id" field has 100+ unique values. What is that exact 100+ number? If I hover my mouse over the field, it shows the top 10 values etc., but not the total count. Things I have tried as per other posts in the forum:

index=aws sourcetype="aws:cloudtrail" | fields aws_account_id | stats dc(count) by aws_account_id

This does show me the total count (which is 156), but it shows like this:

Instead, I want the data in this tabular format:

Fieldname         Count
aws_account_id    156

Thanks in advance
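A sketch (not from the original post) that puts the distinct count into the two-column layout asked for, reusing the index and sourcetype from the post:

index=aws sourcetype="aws:cloudtrail"
| stats dc(aws_account_id) as Count
| eval Fieldname="aws_account_id"
| table Fieldname Count

dc() counts the distinct values directly, so there is no need for a by clause; the eval simply labels the row with the field name.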
I have a dashboard for all SSL certificates. I'd like to set up a few alerts for renewal reminders from Splunk. My current query is shown below:

Index=epic_ehr source=C:\\logs\certs\\results.json
|Search validdays<60
|table hostname,validddays,issuer,commonName

My custom trigger condition is:

search validdays="*" AND count<273

When I run this I see results, but no alert is triggered, nor do I receive any email. Please assist.
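One commonly suggested pattern (a sketch, not from the original post, with the field name spelled consistently as validdays and the source path quoted) is to do all of the filtering in the search itself and set the alert trigger condition to "number of results is greater than 0" instead of a custom search condition:

index=epic_ehr source="C:\\logs\\certs\\results.json"
| search validdays<60
| table hostname, validdays, issuer, commonName

With the filtering in the search, the alert fires whenever any certificate row comes back, which is usually easier to reason about than a secondary custom condition.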
Introduction

This blog post is part of an ongoing series on OpenTelemetry. This guide is addressed to developers, product managers, and enthusiasts worldwide looking to contribute to the OpenTelemetry project. OpenTelemetry is a vast project with the aim of standardizing and promoting observability best practices around metrics, traces, and logs. OpenTelemetry defines a model to represent metrics, traces, and logs. It defines well-known semantic conventions around metric names, dimensions, units, and properties. The project implements the common model as a protobuf model, from which code is generated for the SDKs in the different languages. The project also implements a collector that is able to integrate with various technologies, receiving or sending data.

The Hero's Journey

As a hopeful contributor to the OpenTelemetry project, you will want to offer changes to a project such as the collector or a specific SDK.

Track your work with an issue

Your first stop is to check the existing issues on the projects and see if someone has already started working on this. Please create a new issue right away to talk about your work and expose it to the community. It takes time for the community to gel with new ideas - you should be prepared to talk about your idea and your design. If you require help, you can attend a SIG meeting to discuss your proposal. You can also ask questions on the CNCF Slack.

When opening an issue, you will be offered a template to help guide you through the steps of qualifying your issue. If you are proposing a new component, you may need a maintainer to sponsor your addition. If you stop there - that is already helping the project move forward! We thank you for your contribution.

Shift Left Your Work

(Image: Devopedia. 2022. "Shift Left." Version 6, February 15. Accessed 2022-06-15. https://devopedia.org/shift-left)

The OpenTelemetry project is a bit different from most open source projects, as it is not just software - it also defines a specification. As you start working on the project, you will notice that you have to make design choices, such as naming metrics, or making configuration choices and choosing valid defaults. Naming is hard - finding consensus on naming is harder. As you progress, you should open a new issue under the OpenTelemetry specification repository and tie it to your original issue to provide context to maintainers. The specification contains information about SDK environment variables, semantic conventions, sampling, instrumentation, and data serialization methods. Please provide a pull request with your changes, and attend one of the OpenTelemetry Specification SIG meetings to discuss them. Once those changes are merged, you have done the hardest part of contributing to OpenTelemetry, and the design choices that you have made will make the next step easy. However, if you are not a developer, this is already a transformative effort. Thank you for your help!

Cultivate the Law of Least Surprise

Now armed with a plan and having built consensus around your changes, you are well positioned to implement them. A good rule of thumb is to be as least surprising as possible when it's time to review your code. You should, of course, have great test coverage, and make sure the build passes and that documentation is up to date. You should also prefer small changes over a large pull request. It's much easier to say yes to three small pull requests than to one large one.
When you open a new pull request against the repository, it will be automatically assigned to a maintainer based on the paths you touch in your pull request. You should leave your pull request as a draft while it is still being developed.

An Example

I took on the task of adding a new metric to host metrics: the number of threads of a running process, as exposed by the hostmetrics receiver. First, I looked for an open issue and found that someone had already filed an issue for this use case. Citing this issue, I created a pull request specifically for this new metric. I joined the specification SIG call (the 2022-08-09 meeting) to discuss the merits of my new addition, and I also discussed this addition on the ticket. The specification SIG met one more time to discuss this new metric. Once the specification's pull request was merged, I could concentrate on the changes to the collector. I changed the hostmetrics receiver to optionally report thread count. I made sure to add this information to the changelog using a yaml file under the unreleased folder at the root of the repository. With this pull request merged, I could start using this new metric after the next release, 0.59.

— Antoine Toulme, Senior Engineering Manager, Blockchain & DLT
Hi folks, I'm trying to list all users from my Splunk Cloud using this link: https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTREF/RESTaccess#authentication.2Fusers:~:text=s%3Adict%3E%0A%20%20%20%3C/content%3E%0A%20%3C/entry%3E-,authentication/users,-https%3A//%3Chost%3E%3A%3CmPort

However, I'm using a custom role that has only the following capabilities:

* admin_all_objects
* rest_access_server_endpoints
* rest_apps_management
* rest_apps_view
* rest_properties_get
* edit_user
* search

The user is unable to pull all users. My assumption is that because this user does not inherit any other role, it is not able to list all users, as per the grantableRoles. If I'm right, is there any way for this user to pull all users with the REST API? Or what capabilities am I missing? Thanks in advance.
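As a quick sanity check (a sketch, not from the original post), the same endpoint can be queried from the search bar with the rest command while logged in as the restricted user; if this also comes back short, the limitation is on the role rather than on how the REST call is being made:

| rest /services/authentication/users
| table title realname roles email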
Howdy Splunk Community, I'm curious if anyone here has experience with, or is currently utilizing, Splunk's "Azure Functions for Splunk", specifically the "event-hubs-hec" solution, to successfully push events from their Azure tenant to their Splunk deployment. If so, I'm ultimately curious what designs / architecture patterns you used when deploying and segmenting out your Azure Event Hub Namespaces and Event Hubs. Reading over the README in the repo leads me to believe that you can get away with dumping all of the events generated within your tenant into a single event hub namespace / event hub, assuming you stay within the performance limitations imposed by the event hub. I don't particularly like this model, as I believe it makes troubleshooting ingestion / data issues a bit of a pain since all of your data, regardless of source or event type, is in a single centralized location, so I would like a bit more organization than that. I'm slowly working on a rough draft of how I think I want to break out my Event Hub Namespaces / Event Hubs, but right now I'm not sure whether I'm going to make my life, or my development team's lives, harder, as they will have to interface with this design via Terraform as we continue implementing infrastructure as code in our platform.

My initial breakout looks something like:

- A unique subscription per Azure region we are deployed in, dedicated to logging infrastructure, that will contain the Event Hub Namespaces and corresponding function applications that push events out to Splunk, etc. All infrastructure that exists within a specified region will send its Diagnostic Logging Events (Platform logs / Resource logs) into the logging subscription.
- An EH Namespace for SQL Servers, with EHs broken out per event type generated by the SQL Servers
- An EH Namespace for Key Vaults, with EHs broken out per event type generated by Key Vaults
- An EH Namespace for Storage Accounts, with EHs broken out per event type generated by the storage accounts
- An EH Namespace for global Microsoft services (Azure Active Directory, Microsoft Defender, Sentinel, etc.)
- An EH Namespace for Azure PaaS / IaaS offerings (Databricks, Azure Data Factory, Cognitive Search, etc.)
- An EH Namespace for networking events (NAT Gateways, Firewalls, Public IPs, APIM, Front Door, WAF, etc.)

and so on and so forth.

Anyone willing to lend their insight?
This is a search string I inherited and for the most part it has worked fine. There is a desire to modify it, so I thought I would seek help.

index=firewall host=10.214.0.11 NOT src_ip=172.26.22.192/26
| stats count by src_ip, dest_ip
| appendpipe [| stats sum(count) as count by src_ip | eval keep=1 | eventstats sum(count) as total_log_count ]
| appendpipe [| stats sum(count) as count by dest_ip | eval keep=1 | eventstats sum(count) as total_log_count ]
| where keep=1
| sort -count
| head 20
| where total_log_count > 1000000

Below are example outputs received, from separate instances:

src_ip            dest_ip           count   keep   total_log_count
                  192.168.14.11     39164   1      1008943
192.168.14.11                       32239   1      1008943
10.80.0.243                         31880   1      1008943
                  143.251.111.100   30773   1      1008943
                  156.33.250.10     15544   1      1008943
192.242.214.186                     13793   1      1008943
172.253.63.188                      12359   1      1008943
                  192.168.5.46      12346   1      1008943
192.168.10.146                      10987   1      1008943
                  192.168.3.19      9079    1      1008943
192.168.3.195                       8970    1      1008943
192.168.3.18                        8074    1      1008943
172.18.3.42                         7709    1      1008943
                  192.168.14.23     7647    1      1008943
192.168.5.46                        7583    1      1008943
                  172.253.63.188    6549    1      1008943
172.33.250.10                       5806    1      1008943
                  192.168.24.65     5654    1      1008943
                  172.253.115.188   5494    1      1008943
                  192.168.24.134    4388    1      1008943

src_ip            dest_ip           count   keep   total_log_count
87.114.132.220                      45441   1      1005417
                  192.168.35.6      39597   1      1005417
192.168.14.15                       31629   1      1005417
                  172.30.5.9        16348   1      1005417
10.80.0.243                         15444   1      1005417
196.199.95.18                       13883   1      1005417
                  172.253.62.139    12703   1      1005417
                  192.168.12.45     11957   1      1005417
                  172.253.115.188   10010   1      1005417
192.168.3.19                        9676    1      1005417
                  192.168.35.16     9641    1      1005417
192.168.5.146                       9290    1      1005417
192.168.25.46                       7440    1      1005417
172.253.115.188                     7292    1      1005417
                  192.168.3.18      6163    1      1005417
192.168.39.18                       6063    1      1005417
176.155.19.207                      5818    1      1005417
                  4.188.95.188      4947    1      1005417
                  5.201.73.253      4942    1      1005417
                  45.225.238.30     4938    1      1005417

Is there a way to modify the query so that it only triggers if a single entity is causing logs greater than a certain number (e.g. 50000) in combination with the total logs also being over a certain threshold? There is still a desire to see an output reporting the top 20 IPs. Your time, consideration, and helpful suggestions are appreciated. Thank you.
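One way to express that condition (a sketch, not from the original post, keeping the existing field names; the 50000 single-entity threshold is taken from the example in the question) is to compute the largest per-entity count alongside the existing total and filter on both at the end:

index=firewall host=10.214.0.11 NOT src_ip=172.26.22.192/26
| stats count by src_ip, dest_ip
| appendpipe [| stats sum(count) as count by src_ip | eval keep=1 | eventstats sum(count) as total_log_count ]
| appendpipe [| stats sum(count) as count by dest_ip | eval keep=1 | eventstats sum(count) as total_log_count ]
| where keep=1
| eventstats max(count) as max_single_entity_count
| sort -count
| head 20
| where total_log_count > 1000000 AND max_single_entity_count > 50000

Because both total_log_count and max_single_entity_count are constant across the remaining rows, either all of the top 20 rows survive (and the alert fires with its usual table) or none do.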
Need regex & null queue help to filter events in /var/log/messages. Here is the regex101 link: regex101: build, test, and debug regex (IP & hostname randomized)

props.conf

[source::/var/log/messages]
TRANSFORMS-set = setnull,setparsing

transforms.conf

[setnull]
REGEX = \w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\s\w+\n
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = \w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\s\w{5}\d{4}\S\i.ab2.jone.com\s.+\n
DEST_KEY = queue
FORMAT = indexQueue

The regex is not dropping the unwanted events in /var/log/messages. I am doing this on the HF, before the UF.
So I have a query which returns a value over a period of 7 days. The below is like the query, but I took a few items out:

index=xxxx search xxxxx
| rex field=_raw "projects/\\s*(?<ProjectID>\d+)"
| rex field=_raw "HTTP\/1\.1\ (?P<Status_Code>[^\ ]*)\s*(?P<Size>\d+)\s*(?P<Speed>\d+)"
| eval MB=Size/1024/1024
| eval SecTM=Speed/1000
| eval Examplefield=case(SecTM<=1.00, "90%")
| stats count by Examplefield
| table count

I can get the single value over 7 days. I want to be able to do a comparison against the previous 7 days. So let's say the number is 100,000 and the previous week was 90,000; then it shows up 10,000, or vice versa, if that makes sense. I have seen the sample dashboard with a Single Value and an arrow going up or down, but I just have no clue how to write the time part of the syntax.
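One possible approach (a sketch, not from the original post; the 14-day window, the 7-day span, and the isnotnull filter are assumptions layered on the placeholders from the question) is to run the same search over the last two weeks and bucket it into 7-day spans, so the Single Value visualization can show the latest value with a trend computed against the previous span:

index=xxxx xxxxx earliest=-14d@d latest=@d
| rex field=_raw "projects/\\s*(?<ProjectID>\d+)"
| rex field=_raw "HTTP\/1\.1\ (?P<Status_Code>[^\ ]*)\s*(?P<Size>\d+)\s*(?P<Speed>\d+)"
| eval SecTM=Speed/1000
| eval Examplefield=case(SecTM<=1.00, "90%")
| where isnotnull(Examplefield)
| timechart span=7d count

When the search returns a time series like this, the Single Value visualization can display the most recent bucket and a trend indicator (the up/down arrow and delta, e.g. +10,000) relative to the bucket before it.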
I have a query that does a group by, which allows the sum(diff) column to be calculated:

[search] | stats sum(diff) by X_Request_ID as FinalDiff

From here, how can I list out only the entries that have a sum(diff) > 1? My attempt looks like:

[search] | stats sum(diff) by X_Request_ID as FinalDiff | where FinalDiff>1

My issue is that after the group by happens, the query seems to forget about the grouped sum, and so I cannot compare it to 1.
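A sketch (not from the original post): in SPL the rename belongs on the aggregation itself, before the by clause, so that the renamed field actually exists for the where clause to test:

[search]
| stats sum(diff) as FinalDiff by X_Request_ID
| where FinalDiff > 1

With "as FinalDiff" placed after "by X_Request_ID", the rename is never applied, which is why the comparison has nothing to work with.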