All Posts


Instead of looking in the app, please try clicking Settings -> Data Inputs, then look for Akamai Security Incident Event Manager API. Once you locate it, click on it and follow the instructions on this page: https://techdocs.akamai.com/siem-integration/docs/siem-splunk-connector#install-the-splunk-connector
There is no UF add-on specific to ES. ES can produce an add-on for your indexers, but that method can be used only in limited circumstances. See https://docs.splunk.com/Documentation/ES/7.2.0/Install/InstallTechnologyAdd-ons#Deploy_add-ons_to_forwarders for when it can be used and for alternatives in other environments. I recommend manual installation of add-ons.
Again - there is no such thing as an "add-on for UF". There are several different add-ons (which you install on various components of your Splunk infrastructure, including UFs), each needed for the specific solution you want to ingest data from. So if you want to process logs from Checkpoint firewalls, you use the TA for Checkpoint. If you get logs from Proofpoint, you install the TA for Proofpoint. And so on.
Can you share the TA for the UF that is specifically used with ES? Or the download link, or any helpful screenshot?
Hi Team, I have recently installed the AppDynamics Platform Admin on a Linux server and successfully installed the Controller through the GUI, but I am not able to install the Events Service. (Note: I have two Linux servers, one for the Platform Admin and Controller, and a second server for the Events Service.) I successfully added the Events Service server host in the Hosts tab over OpenSSH between the two servers. While installing the Events Service I got a connection timeout error (unable to ping), so I tried changing the property values in the events-service-api-store.properties file to IP addresses instead of hostnames. Then I added the following environment variable for the new user:

export INSTALL_BOOTSTRAP_MASTER_ES8=true

After that I restarted the Events Service manually using the commands below from the events-service/processor directory:

bin/events-service.sh stop -f && rm -r events-service-api-store.id && rm -r elasticsearch.id
nohup bin/events-service.sh start -p conf/events-service-api-store.properties &

After following the above steps, I get an error in the Enterprise Console while starting the Events Service. Please help me resolve this issue.
If it's text then Splunk can ingest it. How to ingest it is another matter. There are a few ways to onboard data into Splunk:
- Install a universal forwarder on the server to send log files to Splunk
- Have the server send syslog data to Splunk via a syslog server or Splunk Connect for Syslog
- Use the server's API to extract data for indexing
- Use Splunk DB Connect to pull data from the server's SQL database
- Have the application send data directly to Splunk using HTTP Event Collector (HEC), as in the sketch below
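For the HEC route, a minimal sketch, assuming HEC is enabled on the default port 8088 and you have created a token; the hostname and token are placeholders:

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from the app", "sourcetype": "myapp:log"}'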
@bschaap I'm also facing the same issue. I used the sample OTLP spec log JSON file. How did you fix it?
Need to create a user in Splunk ITSI with the access below:
- Read-only access to all glass tables and dashboards
- No export functionality enabled
- No further drilldown functionality available
When I add certain capabilities to the role, it restricts me to accessing ITSI only. Can someone please suggest the best way to work this out?
| where Status=="FILE_DELIVERED" and then alert when there are no results.
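A minimal savedsearches.conf sketch of that trigger condition, using the placeholder search and field names from the question (the stanza name is hypothetical, and the cron window is an assumption covering 5-7 AM):

[file_delivery_alert]
search = index=* "My-Search-String" | rex "My-Regex" | eval Status=if(like(my_rex_field,"xxx-yyyy%"), "FILE_DELIVERED", "FILE_NOT_DELIVERED") | where Status=="FILE_DELIVERED"
enableSched = 1
cron_schedule = */15 5-6 * * *
counttype = number of events
relation = equal to
quantity = 0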
I have some custom metrics, and all of them work OK on my SaaS; when I test them, they work fine. When I create the dashboard, I can create the graphics, but with certain queries I always get "No data available". The other queries are exactly the same and their graphics don't have any problem; the only difference is in the final part, where we change 0 to 3. The outcome is just an integer number, and AppDynamics shows it as a successful test. What could be the problem?
Hello,

index=* "My-Search-String" | rex "My-Regex" | eval Status=if(like(my-rex-extractor-field,"xxx-yyyy%"), "FILE_DELIVERED", "FILE_NOT_DELIVERED") | table Status

I need to run the above between 5 and 7 AM as an alert via email. Although the file arrives around 05:15 AM, I want to keep running this as an alert until 7 AM, because the alert should continue to state the status to avoid missing it, and it would be detrimental if the status continued to be FILE_NOT_DELIVERED. But the problem is that the alert keeps outputting FILE_NOT_DELIVERED even though the output contains FILE_DELIVERED.

Current behaviour when the alert triggers at 05:45 AM (alert set to run on a cron schedule every 15 minutes):

FILE_NOT_DELIVERED
FILE_NOT_DELIVERED
FILE_DELIVERED
FILE_NOT_DELIVERED
FILE_NOT_DELIVERED

Expected behaviour: as soon as the SPL finds FILE_DELIVERED, the FILE_NOT_DELIVERED results should be suppressed for all subsequent runs and the SPL should continue to return FILE_DELIVERED. How do I achieve this, please?
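One way to sketch that behaviour, assuming the search window always starts at 5 AM of the current day (earliest=@d+5h) so every run can still see the earlier FILE_DELIVERED event; the field name is the placeholder from the question, with hyphens swapped for underscores:

index=* "My-Search-String" earliest=@d+5h latest=now
| rex "My-Regex"
| eval Status=if(like(my_rex_extractor_field,"xxx-yyyy%"), "FILE_DELIVERED", "FILE_NOT_DELIVERED")
| stats count(eval(Status=="FILE_DELIVERED")) as delivered
| eval Status=if(delivered>0, "FILE_DELIVERED", "FILE_NOT_DELIVERED")
| table Status

Collapsing to a single row with stats means one FILE_DELIVERED event anywhere in the window wins, so later runs stop reporting FILE_NOT_DELIVERED.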
Actually it's a bit unclear and calls for clarification indeed. The https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles document doesn't mention the app.conf file in the shcluster directory at all. Only https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/PropagateSHCconfigurationchanges#Set_the_deployer_push_mode specifies that app.conf for a given app must be configured in app/local/app.conf. So my understanding is that the global push mode is processed with normal precedence rules and can be set anywhere in the "normal" chain of config parsing and overlaying, but the override for a specific app must be placed in that app's local directory.
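A sketch of that per-app override on the deployer, following the second doc; the app name is hypothetical and merge_to_default is just one of the documented mode values:

# $SPLUNK_HOME/etc/shcluster/apps/<my_app>/local/app.conf
[shclustering]
deployer_push_mode = merge_to_default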
Hi Have you checked that your server.pem is still valid? In any case, there should be a log entry in mongodb.log if this is the issue. Do you also have enough free disk space on your node? r. Ismo
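A quick way to check the certificate's expiry, assuming the default location of server.pem:

openssl x509 -enddate -noout -in $SPLUNK_HOME/etc/auth/server.pem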
Can Splunk ingest log data from HCL Domino and Notes?
Yes, you can run it on your macOS, but you cannot run it on e.g. VMware Linux VMs (like this Kali Linux) that are based on ARM.
Nice find! How I read it: if it is under $SPLUNK_HOME/etc/system/local on an individual SHC node / the deployer(?) then it's global, but when it's under $SPLUNK_HOME/etc/shcluster/apps/<app>/ then it's local to that app. It's not dependent on default vs. local inside one app! So if it's in your $SPLUNK_HOME/etc/shcluster/apps/<app>/ then it should be local only to that one app, not to all apps. If this does depend on the default vs. local folder inside the apps under etc/shcluster/apps, then it's not what the doc says and it should be reported as a bug/error in the docs.
The search they are running is index=* cloudtrail<bucketnumber>* across a 7-day period.

Environment details: we are using the Splunk Add-on for AWS on a search head cluster, on-prem. On review of the inspect job log, it looks like one user's search is reaching out to source=s3://<aws smart store info> while the other user's search is only searching the local indexes, resulting in a drastic difference of 76 results vs. 8500 during the same time period.

Steps I've taken:
- I checked the app they are searching in and the roles for each user (they are identical)
- I checked the user folders in Splunk; their settings are the same, even down to the time zone
- I even tried adding the index name to the search and having the user with missing logs re-run it; still no change in her results, and the job logs show it is not reaching out to S3

Is there something I am missing? Is this an AWS app setting that I need to adjust? I would appreciate any thoughts you may have on this. Thanks!
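One more thing worth comparing, as a hedged diagnostic sketch: the search filters and index restrictions on each user's roles, since a role-level search filter can silently drop events even when the roles look identical in the UI:

| rest /services/authorization/roles splunk_server=local
| table title srchFilter srchIndexesAllowed srchIndexesDefault srchTimeWin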
If you had | eval recipients="employee1@mail.se,employee2@mail.se" working, why couldn't you just make a macro containing the whole eval recipients="employee1@mail.se,employee2@mail.se"? This way you'd just do <yoursearch> | `your_addresses_macro` and be done with it?
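A minimal macros.conf sketch of that macro (the stanza name matches the hypothetical macro above; it can also be created in the UI under Settings -> Advanced search -> Search macros):

[your_addresses_macro]
definition = eval recipients="employee1@mail.se,employee2@mail.se"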
Hi Since 9.x there has been a "cluster manager redundancy" feature: https://docs.splunk.com/Documentation/Splunk/latest/Indexer/CMredundancy. That doc and what @gcusello pointed to here don't say that this won't work for multisite clusters! Actually, the first doc says this can also be done with a multisite cluster. There are some new issues you must take care of, e.g. LB configuration, but this should be doable. There seem to be some differences between those docs and https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Managersitefailure, so doc feedback is needed (sent). r. Ismo
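Roughly how the peer-side server.conf looks with redundant managers, per the CMredundancy doc; the manager names and hostnames here are hypothetical:

[clustering]
manager_uri = clustermanager:cm1,clustermanager:cm2

[clustermanager:cm1]
manager_uri = https://cm1.example.com:8089

[clustermanager:cm2]
manager_uri = https://cm2.example.com:8089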
I run x86_64 Splunk on my M2 Mac. macOS automatically translates the instructions.