Is it possible to perform custom attribute mapping when syncing user attributes using SAML2 authentication? I know we can map external attributes to first_name, last_name, etc., but we need to set first_name to a nickname attribute if it exists or has a value, and otherwise fall back to the firstname attribute. The configuration doesn't allow us to map two external attributes to the same SOAR user attribute. I wasn't sure if there is a way to script this somewhere, or if we are stuck performing this mapping on the IdP?
I have created an add-on with a few input parameters, one of which is a dropdown list box. When I add a data input from within the app created by the add-on, the dropdown shows fine and I can select an item from it. However, when I create the same data input from the Settings -> Data Inputs menu, the dropdown list box is rendered as a textbox. Any ideas on what I might be doing wrong? Thanks in advance.
In the data, there is an array of 5 commit IDs. For some reason, it is only returning 3 values. Not sure why  2 values are missing. Would like a fresh set of eyes to take a look please. Query index=XXXXX source="http:github-dev-token" eventtype="GitHub::Push" sourcetype="json_ae_git-webhook" | spath output=commit_id path=commits.id sourcetype definition [ json_ae_git-webhook ] AUTO_KV_JSON=false CHARSET=UTF-8 KV_MODE=json LINE_BREAKER=([\r\n]+) NO_BINARY_CHECK=true SHOULD_LINEMERGE=true TRUNCATE=100000 category=Structured description=JavaScript Object Notation format. For more information, visit http://json.org/ disabled=false pulldown_type=true Raw JSON data { "ref":"refs/heads/Dev", "before":"d53e9b3cb6cde4253e05019295a840d394a7bcb0", "after":"34c07bcbf557413cf42b601c1794c87db8c321d1", "commits":[ { "id":"a5c816a817d06e592d2b70cd8a088d1519f2d720", "tree_id":"15e930e14d4c62aae47a3c02c47eb24c65d11807", "distinct":false, "message":"rrrrrrrrrrrrrrrrrrrrrr", "timestamp":"2024-08-12T12:00:04-05:00", "url":"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/aaaaaaaaaaaa", "author":{ "name":"aaaaaa aaaaaa", "email":"101218171+aaaaaa@users.noreply.github.com", "username":"aaaaaa" }, "committer":{ "name":"aaaaaa aaaaaa", "email":"101218171+aaaaaa@users.noreply.github.com", "username":"aaaaaa" }, "added":[ ], "removed":[ ], "modified":[ "asdafasdad.json" ] }, { "id":"a3b3b6f728ccc0eb9113e7db723fbfc4ad220882", "tree_id":"3586aeb0a33dc5e236cb266c948f83ff01320a9a", "distinct":false, "message":"xxxxxxxxxxxxxxxxxxx", "timestamp":"2024-08-12T12:05:40-05:00", "url":"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/a3b3b6f728ccc0eb9113e7db723fbfc4ad220882", "author":{ "name":"aaaaaa aaaaaa", "email":"101218171+aaaaaa@users.noreply.github.com", "username":"aaaaaa" }, "committer":{ "name":"aaaaaa aaaaaa", "email":"101218171+aaaaaa@users.noreply.github.com", "username":"aaaaaa" }, "added":[ ], "removed":[ ], "modified":[ "sddddddf.json" ] }, { "id":"bdcd242d6854365ddfeae6b4f86cf7bc1766e028", "tree_id":"8286c537f7dee57395f44875ddb8b2cdb7dd48b2", "distinct":false, "message":"Updating pipeline: pl_gwp_file_landing_check. 
Adding Sylvan Performance", "timestamp":"2024-08-12T12:06:10-05:00", "url":"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/bdcd242d6854365ddfeae6b4f86cf7bc1766e028", "author":{ "name":"aaaaaa aaaaaa", "email":"101218171+aaaaaa@users.noreply.github.com", "username":"aaaaaa" }, "committer":{ "name":"aaaaaa aaaaaa", "email":"101218171+aaaaaa@users.noreply.github.com", "username":"aaaaaa" }, "added":[ ], "removed":[ ], "modified":[ "asadwefvdx.json" ] }, { "id":"108ebd4ff8ae9dd70e669e2ca49e293684d5c37a", "tree_id":"5a6d71393611718b8576f8a63cdd34ce619f17dd", "distinct":false, "message":"asdrwerwq", "timestamp":"2024-08-12T10:09:33-07:00", "url":"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/108ebd4ff8ae9dd70e669e2ca49e293684d5c37a", "author":{ "name":"dfsd", "email":"l.llllllllllll@aaaaaa.com", "username":"aaaaaa" }, "committer":{ "name":"lllllllllllll", "email":"l.llllllllllll@abc.com", "username":"aaaaaa" }, "added":[ ], "removed":[ ], "modified":[ "A.json", "A.json", "A.json" ] }, { "id":"34c07bcbf557413cf42b601c1794c87db8c321d1", "tree_id":"5a6d71393611718b8576f8a63cdd34ce619f17dd", "distinct":true, "message":"asadasd", "timestamp":"2024-08-12T13:32:45-05:00", "url":"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/34c07bcbf557413cf42b601c1794c87db8c321d1", "author":{ "name":"aaaaaa aaaaaa", "email":"101218171+aaaaaa@users.noreply.github.com", "username":"aaaaaa" }, "committer":{ "name":"GitasdjwqaikHubasdqw", "email":"noreply@gitskcaskadahuqwdqbqwdqaw.com", "username":"wdkcszjkcsebwdqwdfqwdawsldqodqw" }, "added":[ ], "removed":[ ], "modified":[ "a.json", "A1.json", "A1.json" ] } ], "head_commit":{ "id":"34c07bcbf557413cf42b601c1794c87db8c321d1", "tree_id":"5a6d71393611718b8576f8a63cdd34ce619f17dd", "distinct":true, "message":"sadwad from xxxxxxxxxxxxxxx/IH-5942-Pipeline-Change\n\nIh 5asdsazdapeline change", "timestamp":"2024-08-12T13:32:45-05:00", "url":"https://github.com/xxxxxxxxxxxxxxx/AzureWorkload_A00008/commit/3weweeeeeeeee, "author":{ "name":"askjas", "email":"101218171+asfsfgwsrsd@users.noreply.github.com", "username":"asdwasdcqwasfdc-qwgbhvcfawdqxaiwdaszxc" }, "committer":{ "name":"GsdzvcweditHuscwsab", "email":"noreply@gitasdcwedhub.com", "username":"wefczeb-fwefvdszlow" }, "added":[ ], "removed":[ ], "modified":[ "zzzzzzz.json", "Azzzzz.json", "zzzz.json" ] } }
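A hedged suggestion rather than a confirmed fix: with spath, array elements are normally addressed with the {} notation, so path=commits.id may not reliably return every element of the commits array, and SHOULD_LINEMERGE together with a large TRUNCATE can also clip events. A small variation of the search to try:

index=XXXXX source="http:github-dev-token" eventtype="GitHub::Push" sourcetype="json_ae_git-webhook"
| spath output=commit_id path=commits{}.id
| mvexpand commit_id
| stats dc(commit_id) AS distinct_commit_ids values(commit_id) AS commit_ids

If the count still comes up short of 5, comparing len(_raw) against the TRUNCATE setting and checking whether the event was broken or merged at ingest would be the next things to rule out.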
Register here! This thread is for the Community Office Hours session on Security: Risk-Based Alerting on Wed, Oct 2, 2024 at 1pm PT / 4pm ET.

This is your opportunity to ask questions related to your specific Splunk Risk-Based Alerting needs, including:
- Quick guidance to set up the foundation and get started with RBA
- Essential steps of implementing RBA
- Best practices for proper creation of risk rules, modifiers, etc.
- Troubleshooting and optimizing your environment for a successful implementation
- Anything else you'd like to learn!

Please submit your questions at registration. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
Hi, we have a custom Python service being monitored by APM using the OpenTelemetry agent. We have been successful in tracing spans related to our unsupported database driver (clickhouse-driver), but are wondering if there is some tag we can use to get APM to recognize these calls as database calls for the purposes of the "Database Query Performance" screen. I had hoped we could just fill out a bunch of the `db.*` semantic conventions, but none have so far worked to get it to show as a database call (though the instrumented data do show up in the span details). Any tips?
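In case it's useful, below is a minimal Python sketch of attaching the db.* attributes by hand. The span name, database name, and host are hypothetical, and on some backends the span also needs to be of kind CLIENT with db.system and db.statement set before it is treated as a database call:

from opentelemetry import trace
from opentelemetry.trace import SpanKind

tracer = trace.get_tracer("clickhouse.manual")

def run_query(client, sql):
    # Wrap the unsupported driver call in a CLIENT span and attach the
    # db.* semantic-convention attributes (names here are illustrative).
    with tracer.start_as_current_span("clickhouse.query", kind=SpanKind.CLIENT) as span:
        span.set_attribute("db.system", "clickhouse")
        span.set_attribute("db.statement", sql)
        span.set_attribute("db.name", "default")        # hypothetical database name
        span.set_attribute("net.peer.name", "ch-host")  # hypothetical ClickHouse host
        return client.execute(sql)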
Is there a way to get a list of valid keys for a stanza? For example, if you get "Invalid key in stanza" for something like:

[file_integrity]
exclude = /file/path

It doesn't like "exclude", but is there an alternative key that accomplishes the same thing? Thanks in advance!
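One general pointer, assuming the stanza lives in inputs.conf (adjust the conf name to wherever [file_integrity] is defined): the .spec files shipped with Splunk document every valid key per stanza, and btool can validate what the merged configuration actually accepts.

less $SPLUNK_HOME/etc/system/README/inputs.conf.spec
$SPLUNK_HOME/bin/splunk btool inputs list file_integrity --debug
$SPLUNK_HOME/bin/splunk btool check

btool check flags invalid key/stanza combinations across all apps, which is often the quickest way to see whether a different key name is expected.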
I'm trying to achieve the following output using the table command, but am hitting a snag.

Vision ID | Transactions | Good | % Good | Fair | % Fair | Unacceptable | % Unacceptable | Average Response Time | Report Date
ABC STORE (ABCD) | 159666494 | 159564563 | 99.9361601 | 101413 | 0.063515518 | 518 | 0.000324426 | 0.103864001 | Jul-24
Total | 159666494 | 159564563 | 99.9361601 | 101413 | 0.063515518 | 518 | 0.000324426 | 0.103864001 | Jul-24

Thresholds: Good = response <= 1s; Fair = 1s < response <= 3s; Unacceptable = response > 3s

Here is my broken query:

index=etims_na sourcetype=etims_prod platformId=5 bank_fiid = ABCD
| eval response_time=round(if(strftime(_time,"%Z") == "EDT",((j_timestamp-entry_timestamp)-14400000000)/1000000,((j_timestamp-entry_timestamp)-14400000000)/1000000-3600),3)
| stats count AS Total count(eval(response_time<=1)) AS "Good" count(eval(response_time<=2)) AS "Fair" count(eval(response_time>2)) AS "Unacceptable" avg(response_time) AS "Average" BY Vision_ID
| eval %Good= round((Good/total)*100,2), %Fair = round((Fair/total)*100,2), %Unacceptable = round((Unacceptable/total)*100,2)
| addinfo
| eval "Report Date"=strftime(info_min_time, "%m/%Y")
| table "Vision_ID", "Transactions", "Good", "%Good" "Fair", "%Fair", "Unacceptable", "%Unacceptable", "Average", "Report Date"

The help is always appreciated. Thanks!
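A possible cleaned-up version of the search (untested against your data) is sketched below. It aligns the field names used by stats, eval, and table, quotes the percentage field names, and applies the 1s/3s thresholds from the target table; the timestamp arithmetic is left exactly as you wrote it.

index=etims_na sourcetype=etims_prod platformId=5 bank_fiid=ABCD
| eval response_time=round(if(strftime(_time,"%Z")=="EDT",((j_timestamp-entry_timestamp)-14400000000)/1000000,((j_timestamp-entry_timestamp)-14400000000)/1000000-3600),3)
| stats count AS Transactions count(eval(response_time<=1)) AS Good count(eval(response_time>1 AND response_time<=3)) AS Fair count(eval(response_time>3)) AS Unacceptable avg(response_time) AS Average BY Vision_ID
| eval "% Good"=round((Good/Transactions)*100,2), "% Fair"=round((Fair/Transactions)*100,2), "% Unacceptable"=round((Unacceptable/Transactions)*100,2)
| addinfo
| eval "Report Date"=strftime(info_min_time, "%m/%Y")
| table Vision_ID, Transactions, Good, "% Good", Fair, "% Fair", Unacceptable, "% Unacceptable", Average, "Report Date"

If you also need the Total row, addcoltotals labelfield=Vision_ID label=Total can append one, though the percentage and average columns would then need to be recomputed rather than summed.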
In 2022, I made the decision to focus my career on OpenTelemetry. I was excited by the technology and, after working with proprietary APM agent technology for nearly a decade, I believed that it was the future of instrumentation. This ultimately led me to join Splunk in 2023 as an Observability Specialist. Splunk Observability Cloud is OpenTelemetry-native, so this role allowed me to work extensively with OpenTelemetry as customers of all sizes implemented it in their organizations.

So how am I feeling about OpenTelemetry in 2024? Well, I'm even more excited about it than before! In this article, I'll share the top three things that I love about OpenTelemetry.

#1: It's Easy to Use!

In the beginning, instrumenting an application with OpenTelemetry required code changes. This is referred to as code-based instrumentation or manual instrumentation. It was great for early enthusiasts who were passionate about observability, wanted full control over their telemetry, and didn't mind spending time instrumenting their code by hand.

OpenTelemetry has come a long way since then, with some form of auto-instrumentation support for the most popular languages such as Java, .NET, Node.js, and Python. These are also referred to as zero-code solutions. This makes it easier for organizations to get up and running quickly with OpenTelemetry, just as they would with a proprietary solution based on traditional APM agents, while also giving them the flexibility to layer on custom instrumentation if desired.

The advent of the OpenTelemetry Operator for Kubernetes has also made it easier to instrument applications running in Kubernetes. Specifically, the Operator can automatically inject and configure instrumentation libraries for multiple languages, which makes it simple for organizations using Kubernetes to instrument their applications with OpenTelemetry.

Ultimately, these ease-of-use improvements have made OpenTelemetry more accessible and have dramatically reduced the time to value, as it's now possible to be up and running with OpenTelemetry in just minutes.

#2: Flexibility of the OpenTelemetry Collector Architecture

The collector is my favorite part of OpenTelemetry (and perhaps the most flexible yet elegant architecture I've encountered in my career). While the concept of the collector originated in 2017 as part of the OpenCensus project at Google, with OpenTelemetry it has evolved into a mature and highly flexible software component that many organizations depend on.

The OpenTelemetry Collector allows one or more pipelines to be configured, which define how data is received, processed, and exported. This data can include metrics, traces, and logs.

[Figure: an OpenTelemetry Collector pipeline. Source: https://opentelemetry.io/docs/collector/architecture/]

Each pipeline can have one or more Receivers that accept metric, trace, or log data from various sources. The data in the pipeline then passes through one or more Processors, which can transform, filter, or otherwise manipulate the data. Finally, the data is exported using Exporters to one or more observability backends, which lets organizations decide exactly where they want to send their observability data.

This architecture provides near-infinite flexibility. For example, if you want to send your metrics to one observability backend and your traces and logs to another, no problem! A minimal configuration sketch follows below.
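Here is a sketch of such a configuration, with hypothetical endpoints and the stock otlp receiver, batch processor, and otlphttp exporter; each pipeline simply picks its own exporters:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlphttp/metrics_backend:
    endpoint: https://metrics-backend.example.com:4318
  otlphttp/tracing_backend:
    endpoint: https://tracing-backend.example.com:4318

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/metrics_backend]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/tracing_backend]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/tracing_backend]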
The same flexibility applies if you want to send only a subset of traces to a backend that resides in a particular jurisdiction, to comply with data residency requirements.

Here are a few additional examples of what you can do with the collector:
- Use the Resource Detection Processor to gather additional information about the host it's running on and add it as context to metrics, spans, and logs.
- Use the Redaction Processor to redact sensitive data before it leaves your network.
- Use the Transform Processor to rename span attributes, to ensure naming conventions are enforced across all of your observability data.

The architecture also allows collectors to be chained. Typically, this means running a collector in agent mode on each host or Kubernetes node to gather data from applications running on that host, along with infrastructure-related data. These collectors then export their data to another collector running in gateway mode, which performs additional processing before the data is exported to one or more observability backends.

[Figure: chained collectors in agent and gateway mode. Source: https://opentelemetry.io/docs/collector/architecture/]

#3: Support for Logs

While metrics and traces have been Generally Available (GA) in OpenTelemetry for several years, it wasn't until November 2023 that logs joined these other signals and became GA as well. This was a tremendous step forward, as logs play a critical role in the troubleshooting process, frequently providing the details that engineers need to understand why a particular issue is occurring.

I love that OpenTelemetry provides so many different ways to ingest logs, including support for Fluent Bit and Fluentd with the Fluent Forward Receiver, and the versatile Filelog Receiver, which can be configured to ingest logs from just about any file-based source. It gets even better with Kubernetes, which now includes a Logs Collection Preset. This preset, which is available in the collector Helm chart, uses the Filelog Receiver under the hood and provides all of the configuration needed to automatically collect logs from the standard output of Kubernetes containers.

Collecting logs with OpenTelemetry means that we can apply all of the power and flexibility of the collector discussed in the previous section to logs. And the metrics, traces, and logs collected with OpenTelemetry share the same Semantic Conventions, which makes it possible to correlate these different signals. For example, log events that include the TraceId, SpanId, and TraceFlags fields can be linked to the corresponding trace data. This makes it easy to jump between related logs, traces, and metrics when troubleshooting an issue, where time is of the essence.

I also love that some languages, such as Java, have started to collect logs automatically with OpenTelemetry, so there's no need to use the Filelog Receiver to ingest the application logs: everything is captured by the OpenTelemetry SDKs under the hood. In addition to requiring less configuration effort, collecting logs this way is also more performant, as there's no need to read application log files from the host filesystem and parse them.

Summary

Thanks for taking the time to hear my thoughts on OpenTelemetry. Please leave a comment or reach out to let us know what you love about OpenTelemetry.
How to Encrypt and Secure Your Machine Agent AccessKey

Encrypting and securing your credentials is of utmost importance. The Machine Agent lets you configure encryption for your AccessKey. Let's go through the steps.

1. Navigate to the <MA-Home> directory and create a keystore with the command below:

jre/bin/java -jar lib/secure-credential-store-tool-1.3.23.jar generate_ks -filename '/opt/appdynamics/secretKeyStore' -storepass 'MyCredentialStorePassword'

This creates the keystore for you. The output should look like:

Successfully created and initialized new KeyStore file: /opt/appdynamics/secretKeyStore

2. Obfuscate the password used to access this keystore:

jre/bin/java -jar lib/secure-credential-store-tool-1.3.23.jar obfuscate -plaintext 'MyCredentialStorePassword'

The output should look like:

s_-001-12-oRQaGjKDTRs=xxxxxxxxxxxxx=

3. Encrypt your AccessKey:

jre/bin/java -jar lib/secure-credential-store-tool-1.3.23.jar encrypt -filename /opt/appdynamics/secretKeyStore -storepass 'MyCredentialStorePassword' -plaintext 'xxxxxx'

The output should look like:

-001-24-mEZsR+xxxxxxxxx==xxxxxxxxxxxx==

4. Now edit the <MA-Home>/conf/controller-info.xml file: replace the accessKey with the encrypted value and add a few more parameters for the encryption:

<account-access-key>-001-24-mEZsR+nrScSXlewlZbTQgg==xxxxxxxxx==</account-access-key>
<credential-store-password>s_-001-12-xxxxxxx=xxxxxx=</credential-store-password>
<credential-store-filename>/opt/appdynamics/secretKeyStore</credential-store-filename>
<use-encrypted-credentials>true</use-encrypted-credentials>

Great work! You can deploy your Machine Agent now!
Hello, I need to download Splunk Enterprise 7.2.* in order to upgrade from version 6.6. Where can I find the older versions? Thank you.
Looking to add a tooltip string of site names, included in the same lookup file as the lat/long, on a cluster map. Is this even possible?
Hello, as per https://docs.splunk.com/Documentation/Splunk/9.3.0/Forwarding/EnableforwardingonaSplunkEnterpriseinstance, where are files like outputs.conf, props.conf, and transforms.conf stored? I am using Splunk Web on Splunk Enterprise. Also, where is my $SPLUNK_HOME? I am trying to set up heavy forwarding to send indexed data to a database on a schedule. Thanks.
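For reference, and speaking in general defaults rather than anything specific to your deployment: $SPLUNK_HOME is typically /opt/splunk on Linux and C:\Program Files\Splunk on Windows, and configuration files such as outputs.conf, props.conf, and transforms.conf live under $SPLUNK_HOME/etc/system/local/ or $SPLUNK_HOME/etc/apps/<app>/local/. A minimal outputs.conf for forwarding (the indexer host and port below are placeholders) looks like this:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer1.example.com:9997

Note that forwarding sends data to other Splunk instances; pushing data into a database on a schedule is usually handled differently, for example with Splunk DB Connect database outputs or a scheduled export.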
I'm using the Splunk TA for Linux (Splunk_TA_nix) to collect server logs. Some background: looking in the _internal index, I am seeing a lot of these warnings:

08-23-2024 15:52:39.910 +0200 WARN DateParserVerbose [6460 merging_0] - A possible timestamp match (Wed Aug 19 15:39:00 2015) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=lastlog|host=<hostname>|lastlog|13275
08-23-2024 15:52:39.646 +0200 WARN DateParserVerbose [6460 merging_0] - A possible timestamp match (Fri Aug 7 09:08:00 2009) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=lastlog|host=<hostname>|lastlog|13418
08-23-2024 15:52:32.378 +0200 WARN DateParserVerbose [6506 merging_1] - A possible timestamp match (Fri Aug 7 09:09:00 2009) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=lastlog|host=<hostname>|lastlog|13338

This is slightly confusing and somewhat problematic, as "lastlog" is collected not through a file monitor but from scripted output. The "lastlog" file itself is not collected/read, and a stat check on the file confirms accurate dates, so it is not the source of the problem. I cannot see anything in the output of the commands in the script (Splunk_TA_nix/bin/lastlog.sh) that would indicate the presence of a "year"/timestamp. The indexed log does not contain a "year", and the actual _time timestamp is correct. These "years" in _internal are also from a time when the server was not running/present, so they are not collected from any actual source on the server.

And the questions:
- Why am I seeing these warnings?
- Where are these problematic "timestamps" generated from?
- How do I fix the issue?

All the best
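These warnings typically come from the date parser scanning the scripted lastlog output at index time: the per-user last-login dates inside the event text (for example "Wed Aug 19 15:39:00 2015") look like timestamps to the parser, even though _time itself ends up correct. One common way to silence this, assuming the sourcetype is lastlog, is to pin the timestamp to index time in props.conf on the instance that parses the data:

[lastlog]
DATETIME_CONFIG = CURRENT

Alternatively, widening MAX_DAYS_AGO for that sourcetype would also suppress the warning, but DATETIME_CONFIG = CURRENT avoids parsing the embedded dates altogether.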
Hello,

I want to create a dataset for Machine Learning: KPI names and the Service Health Score as field names, with their values as the values, for the last 14 days. How do I retrieve the kpi_value and health_score values? Are they stored somewhere in an ITSI index? I cannot find a kpi_value field in index=itsi_summary.

Also, if you have done Machine Learning / Predictive Analytics in your environment, please suggest an approach.

#predictive analytics #machine learning #Splunk Machine Learning Toolkit #Splunk ITSI
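For what it's worth, and treating this as a sketch since field names can vary by ITSI version: in the itsi_summary index the KPI value is usually written to the alert_value field rather than kpi_value, and the Service Health Score shows up as a KPI named ServiceHealthScore. A starting point for a 14-day training dataset could be:

index=itsi_summary earliest=-14d kpi=*
| timechart span=1h limit=0 avg(alert_value) BY kpi

Here each KPI (including ServiceHealthScore) becomes a column whose values are the averaged alert_value, which is roughly the wide layout the Machine Learning Toolkit expects for fit/apply.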
What do we use for the Base URL when configuring the app's Add-on Settings? Should this be left at the default of slack.com/api?
I am facing an error while running the datamodel below: The search job has failed due to err='Error in 'SearchParser': The search specifies a macro 'isilon_index' that cannot be found.'
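That error usually means the macro referenced by the datamodel's constraint search is not defined in, or not shared with, the app you are searching from. If the add-on that should provide it is not installed, one workaround (a sketch; point the definition at whatever index actually holds your Isilon data) is to define the macro yourself in macros.conf or via Settings > Advanced search > Search macros:

[isilon_index]
definition = index=isilon

Also check the macro's permissions: a macro that is private or scoped to another app raises the same "cannot be found" error.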
Why can't I open the Support Portal page? I am having trouble referencing a case.
Hi Team,

We can see latency in our logs. Log ingestion is via syslog: Network devices --> Syslog server --> Splunk.

Using the query below, we see a minimum of 10 minutes and a maximum of 60 minutes of log latency:

index="ABC" sourcetype="syslog" source="/syslog*"
| eval indextime=strftime(_indextime,"%c")
| table _raw _time indextime

What should our next steps be to find where the latency is introduced and how to fix it?
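To narrow down where the delay is introduced, it may help to measure the lag per host and per indexer directly; a sketch based on your existing search:

index="ABC" sourcetype="syslog" source="/syslog*"
| eval lag_sec=_indextime-_time
| stats count avg(lag_sec) AS avg_lag perc90(lag_sec) AS p90_lag max(lag_sec) AS max_lag BY host splunk_server

If the lag is uniform across hosts, the syslog server and the forwarder are the usual suspects: check thruput limits (maxKBps under [thruput] in limits.conf) and queue blocking in the forwarder's metrics.log. If only some hosts lag, the devices' own clocks or the timestamp/timezone settings for the sourcetype are more likely culprits.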
This is my inputs.conf:

[monitor://D:\temp\zkstats*.json]
crcSalt = <SOURCE>
disabled = false
followTail = 0
index = abc
sourcetype = zk_stats

And my props.conf:

[zk_stats]
KV_MODE = json
INDEXED_EXTRACTIONS = json

However, my search index=abc sourcetype=zk_stats is not getting new events. That is, when new files such as zkstats20240824_0700 arrive, they are not being indexed.
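A couple of hedged suggestions, since the monitor stanza itself looks reasonable. INDEXED_EXTRACTIONS = json only takes effect on the instance that first reads the file (the forwarder), and combining it with KV_MODE = json causes duplicate field extraction, so the props are usually split like this (followTail is deprecated and can be dropped):

props.conf on the forwarder that monitors D:\temp:
[zk_stats]
INDEXED_EXTRACTIONS = json

props.conf on the search head:
[zk_stats]
KV_MODE = none

For the files that are not picked up, the quickest check is how splunkd sees the input: run splunk list inputstatus on the forwarder, or search index=_internal source=*splunkd.log* zkstats, which should show whether the new files are being skipped and why.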
Hi, I am currently dealing with some logs being forwarded via syslog to a third-party system. The question is whether there is an option to prevent Splunk from adding an additional header to each message before it is forwarded. There should be a way to disable the additional syslog header when forwarding, so that the third-party system receives the original message without the added header. Any ideas? Can you give me a practical example? I am trying to test this by modifying outputs.conf.

Thanks,
Giulia
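One approach that is often used when the receiver must get the event exactly as indexed (a sketch; the destination host and port are placeholders, and you still need props/transforms routing to select which events go to this group): skip the syslog output type and send uncooked raw data over plain TCP, so Splunk does not prepend a syslog header of its own.

[tcpout:third_party_raw]
server = receiver.example.com:514
sendCookedData = false

As far as I know, the [syslog] output type always writes its own priority/timestamp header (only its format can be tuned via the priority and timestampformat settings), which is why the raw tcpout route above is the more common workaround.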