All Posts

I am stuck at this point -- "Click the Akamai Security Incident Event Manager API." I can't find this in Data Inputs after installing the add-on.
@splunklearner If you don't have a heavy forwarder and need to install the add-on, you can install it on the search head cluster. Please refer to the documentation below for more details and installation instructions: Install an add-on in a distributed Splunk Enterprise deployment - Splunk Documentation. To deploy an add-on to the search head cluster members, use the deployer. https://docs.splunk.com/Documentation/Splunk/9.4.1/DistSearch/PropagateSHCconfigurationchanges
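In case it's useful, here is a minimal sketch of the deployer push itself; the add-on directory name, SHC member hostname, and credentials are placeholders for your environment:

# On the deployer: stage the add-on in the shcluster apps directory
cp -r TA-my-addon $SPLUNK_HOME/etc/shcluster/apps/

# Push the bundle to the SHC members (-target is any one member's management port)
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme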
@splunklearner I recommend using this add-on: Akamai SIEM Integration | Splunkbase
@splunklearner You have two options for sending Akamai logs to Splunk:

1. Install the add-on on your heavy forwarder, configure it, and use it to send logs to Splunk.

2. If Akamai supports syslog, send the logs to your syslog server, which will then forward them to Splunk. In this case, configure syslog-ng or rsyslog to capture the Akamai logs in a specific directory and create the necessary inputs to onboard the logs into Splunk: configure the UF on your syslog server to monitor the log files, update inputs.conf to specify the log file paths, and update outputs.conf to forward the data to your indexers (see the sketch after this example).

Example inputs.conf:

[monitor:///var/log/akamai/*.log]
index = akamai
sourcetype = akamaisiem
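The outputs.conf side isn't shown above; here is a minimal sketch for the UF on the syslog server, assuming indexer hostnames idx1/idx2 and the default receiving port 9997 (all placeholders):

# outputs.conf on the UF: forward the monitored Akamai logs to the indexers
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997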
@splunklearner Please follow this guide: SIEM Splunk connector
Can anyone please help me get Akamai logs into Splunk? We have a clustered environment with a syslog server (UF installed on it) that forwards data initially to our Deployment Server, which then deploys to the Cluster Manager and Deployer. We have 6 indexers, with 2 indexers in each site (3-site multisite cluster), and 3 search heads, one in each site. How should I proceed?
Hi @sol69 Please find the following instructions for configuring the add-on.

Prerequisites

- Wireshark installation: Download and install Wireshark. During the installation process, deselect all components except for tshark (the command-line tool needed for packet capture), unless you have other reasons for installing the full package.
- TA-tshark app installation: Install the TA-tshark add-on on your Universal Forwarder (UF). After installation, ensure you configure the add-on to forward the necessary data.

Configuration steps

1. Modify the configuration files.
   - inputs.conf: Locate the file (often included in the app package). If needed, modify the configuration -- by default, it is set up for Windows to capture traffic on port 53 (DNS) on the first interface. The input is defined with the name tshark:port53 and a specified sourcetype.
   - bin/tcpdump.path: Adjust this file if your environment requires a different tcpdump/tshark path than what is provided.
2. Enable packet capture: In the inputs.conf file, find the stanza corresponding to the capture input and set disabled = 0 to enable it.
3. Restart the Universal Forwarder (UF) after making all changes to apply the new configuration settings.

Optional: Additional apps for enhanced functionality

For further insights and to extend the functionality of the installed app, consider installing the following complementary Splunk apps, which provide additional analysis and visualization capabilities related to DNS and DHCP traffic:

- DNS Insight: DNS Insight on Splunkbase
- DHCP Insight: DHCP Insight on Splunkbase

Note: How you install the app on your UF may depend on your architecture -- are you using a Deployment Server to distribute apps to your UF(s)?

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
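P.S. Before wiring tshark into the UF, it can be worth confirming from the command line that it captures DNS traffic at all; a quick sanity check (interface number 1 is an assumption, list yours first):

# List available capture interfaces
tshark -D

# Capture 10 DNS packets on interface 1 to confirm traffic is visible
tshark -i 1 -f "port 53" -c 10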
Thanks for the reply, @yuanliu. Sadly, I don't know whether it's actually JSON; it might be. It's a college assignment, and we just know it's a bunch of data/logs in a tar.gz. "src_ip" and the other one have never appeared automatically in interesting fields so far. Would you expect them to appear under their "natural names" if it were JSON, or would I need to do something proactive? Either way, why doesn't the extracted field appear?
@Rakzskull Splunk manages the archival storage in DDAA (Dynamic Data Active Archive), and you don't have direct access to the underlying S3 buckets. To export archived data, open a support ticket with Splunk.
@sol69 I recommend exploring an alternative method for forwarding the data, as this add-on or app does not appear to be CIM-compliant. It would be best to review these threads for more details: https://community.splunk.com/t5/Splunk-Enterprise/Monitoring-Wireshark-usage-with-splunk/m-p/690530 https://community.splunk.com/t5/Monitoring-Splunk/Splunk-monitoring-a-wireshark-file/td-p/14218
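If you do end up keeping this data source, CIM alignment is typically handled with field aliases and tags in a local props.conf; a minimal sketch, assuming the sourcetype is tshark:port53 and the raw field is named dns_query (both placeholders, not confirmed by the add-on):

# local/props.conf on the search head: alias a raw field to the CIM Network Resolution field "query"
[tshark:port53]
FIELDALIAS-cim_dns = dns_query AS query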
@sol69 To configure inputs.conf for TA_tshark (Network Input for Windows) on Splunk, follow these steps:

1. Install TA_tshark: Install the TA_tshark add-on on your Universal Forwarder (UF) and configure forwarding.
2. Modify inputs.conf: Open the inputs.conf file located in $SPLUNK_HOME/etc/apps/TA_tshark/local/ (create the file if it doesn't exist). Add the following configuration to capture DNS traffic on port 53, and ensure the disabled attribute is set to 0 to enable the input:

[script://<give your path>]
disabled = 0
index = your_index
sourcetype = tshark:port53

3. Modify tcpdump.path: If needed, update the bin/tcpdump.path file to point to the correct path of tshark.
4. Restart the Universal Forwarder: After making these changes, restart the Universal Forwarder to apply the new configuration.

inputs.conf - Splunk Documentation
@shabamichae In the Splunk Architect practical lab exam, configuring TLS/SSL for Universal Forwarder (UF) to Indexer (IDX) communication is not strictly required unless explicitly mentioned in the exam requirements. If the exam explicitly states that secure communication must be configured, then failing to implement SSL/TLS for UF-IDX traffic could result in deductions. Since time is limited, focus on core configurations (indexing, forwarding, clustering, search head deployment) first, then handle TLS if necessary.
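If the exam does call for it, TLS for UF-to-IDX traffic boils down to a pair of stanzas; a minimal sketch, where certificate paths, password, and port are placeholders and your certificate setup may differ:

# outputs.conf on the UF
[tcpout:ssl_indexers]
server = idx1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/client.pem
sslPassword = <cert_password>
useSSL = true

# inputs.conf on the indexer
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = <cert_password>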
In the practical lab environment, how important is it to configure TLS on Splunk servers during the practical lab? Do I get penalized for not securing UF-IDX traffic using SSL/TLS?
In the practical lab environment, how important is it to configure TLS on Splunk servers during the practical lab? Is it mandatory to configure TLS in my environment?
How do I configure the inputs.conf for TA_tshark (Network Input for Windows) | Splunkbase
Hi @jonxilinx,

The aws:cloudwatch:guardduty source type was intended to be used with a CloudWatch Logs input after a transform from the aws:cloudwatchlogs source type. To use an SQS input, you can transform the data on your heavy forwarder. The configuration below works on the following event schema:

{
  "BodyJson": {
    "version": "0",
    "id": "cd2d702e-ab31-411b-9344-793ce56b1bc7",
    "detail-type": "GuardDuty Finding",
    "source": "aws.guardduty",
    "account": "111122223333",
    "time": "1970-01-01T00:00:00Z",
    "region": "us-east-1",
    "resources": [],
    "detail": { ... }
  }
}

You may need to adjust the configuration to match your specific input and event format.

# local/inputs.conf
[my_sqs_input]
aws_account = xxx
aws_region = xxx
sqs_queues = xxx
index = xxx
sourcetype = aws:sqs
interval = xxx

# local/props.conf
[aws:sqs]
TRANSFORMS-aws_sqs_guardduty = aws_sqs_guardduty_remove_bodyjson, aws_sqs_guardduty_to_cloudwatchlogs_sourcetype

# local/transforms.conf
[aws_sqs_guardduty_remove_bodyjson]
REGEX = "source"\s*\:\s*"aws\.guardduty"
INGEST_EVAL = _raw:=json_extract(_raw, "BodyJson")

[aws_sqs_guardduty_to_cloudwatchlogs_sourcetype]
REGEX = "source"\s*\:\s*"aws\.guardduty"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:cloudwatchlogs:guardduty
This is a little confusing. There is nothing to prevent multivalue fields from being used in lookup, and there is no need for mvexpand. All you need to do is

| lookup whitelistdomains url as emailDomains output url as match

The above assumes that whitelistdomains contains a field named url for this match job. To demonstrate, I'm using a lookup table from a previous question called all_urls. Its content is as follows:

url
www.url1.com
*.url2.com
site.url3.com

This is an emulation - I just changed the lookup name from the above.

| makeresults
| fields - _time
| eval emailDomains = mvappend("www.url1.com", "site.url3.com", "www.url3.com")
``` data emulation above ```
| lookup all_urls url as emailDomains output url as match

This gives

emailDomains     match
www.url1.com     www.url1.com
site.url3.com    site.url3.com
www.url3.com
Forget your extractions for a moment. The code snippet looks exactly like an attempt to use regex to extract from JSON. Could you clarify whether the full raw event is in JSON? If it is, do not use regex. If JSON is just part of the event, the best option is to use an extraction to isolate the JSON portion and then parse that, instead of directly extracting fragments of information.
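For illustration, here is a minimal sketch of that approach in SPL; the field names src_ip/dest_ip and the assumption that the JSON sits somewhere inside _raw are mine, not confirmed by the thread:

``` if the whole event is JSON, spath extracts every field automatically ```
| spath

``` if JSON is only part of the event, carve out the JSON portion first, then parse it ```
| rex field=_raw "(?<json_part>\{.+\})"
| spath input=json_part
| table src_ip dest_ip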
Thank you for illustrating the use case clearly with sample data, logic, and expected result from sample. But you also want to specify whether Json1 and json2 are in the same row/event. Here is a solution if they are.

| table Json1 json2
| transpose 0 column_name=name
| spath input="row 1"
| fields - "row 1"
| foreach *{} [eval <<MATCHSTR>>_array = mv_to_json_array('<<FIELD>>')]
| fillnull value=null
| fields - *{}
| stats list(*) as *
| foreach * [eval "<<FIELD>>" = if(mvcount(mvdedup('<<FIELD>>')) < 2, null(), '<<FIELD>>')]
| transpose 0 column_name=KeyName
| search "row 1" = *
| eval KeyName = if(KeyName LIKE "%_array", replace(KeyName, "_array$", "{}"), KeyName)
| eval "Old Value" = mvindex('row 1', 0), "New Value" = mvindex('row 1', 1)
| fields - "row 1"
| foreach *Value [eval <<FIELD>> = if('<<FIELD>>' != "null", '<<FIELD>>', if(KeyName LIKE "%{}", "[]", null()))]

Here is an emulation you can play with and compare with real data.

| makeresults
| fields - _time
| eval Json1 = "{ \"id\": \"XXXXX\", \"displayName\": \"ANY DISPLAY NAME\", \"createdDateTime\": \"2021-10-05T07:01:58.275401+00:00\", \"modifiedDateTime\": \"2025-02-05T10:30:40.0351794+00:00\", \"state\": \"enabled\", \"conditions\": { \"applications\": { \"includeApplications\": [ \"YYYYY\" ], \"excludeApplications\": [], \"includeUserActions\": [], \"includeAuthenticationContextClassReferences\": [], \"applicationFilter\": null }, \"users\": { \"includeUsers\": [], \"excludeUsers\": [], \"includeGroups\": [ \"USERGROUP1\", \"USERGROUP2\" ], \"excludeGroups\": [], \"includeRoles\": [], \"excludeRoles\": [] }, \"userRiskLevels\": [], \"signInRiskLevels\": [], \"clientAppTypes\": [ \"all\" ], \"servicePrincipalRiskLevels\": [] }, \"grantControls\": { \"operator\": \"OR\", \"builtInControls\": [ \"mfa\" ], \"customAuthenticationFactors\": [], \"termsOfUse\": [] }, \"sessionControls\": { \"cloudAppSecurity\": { \"cloudAppSecurityType\": \"monitor\", \"isEnabled\": true }, \"signInFrequency\": { \"value\": 1, \"type\": \"hours\", \"authenticationType\": \"primaryAndSecondaryAuthentication\", \"frequencyInterval\": \"timeBased\", \"isEnabled\": true } } }", json2 = "{ \"id\": \"XXXXX\", \"displayName\": \"ANY DISPLAY NAME 1\", \"createdDateTime\": \"2021-10-05T07:01:58.275401+00:00\", \"modifiedDateTime\": \"2025-02-06T10:30:40.0351794+00:00\", \"state\": \"enabled\", \"conditions\": { \"applications\": { \"includeApplications\": [ \"YYYYY\" ], \"excludeApplications\": [], \"includeUserActions\": [], \"includeAuthenticationContextClassReferences\": [], \"applicationFilter\": null }, \"users\": { \"includeUsers\": [], \"excludeUsers\": [], \"includeGroups\": [ \"USERGROUP1\", \"USERGROUP2\", \"USERGROUP3\" ], \"excludeGroups\": [ \"USERGROUP4\" ], \"includeRoles\": [], \"excludeRoles\": [] }, \"userRiskLevels\": [], \"signInRiskLevels\": [], \"clientAppTypes\": [ \"all\" ], \"servicePrincipalRiskLevels\": [] }, \"grantControls\": { \"operator\": \"OR\", \"builtInControls\": [ \"mfa\" ], \"customAuthenticationFactors\": [], \"termsOfUse\": [] }, \"sessionControls\": { \"cloudAppSecurity\": { \"cloudAppSecurityType\": \"block\", \"isEnabled\": true }, \"signInFrequency\": { \"value\": 2, \"type\": \"hours\", \"authenticationType\": \"primaryAndSecondaryAuthentication\", \"frequencyInterval\": \"timeBased\", \"isEnabled\": true } } }"
``` data emulation above ```

The above search gives

KeyName                                                  New Value                                    Old Value
conditions.users.excludeGroups{}                         ["USERGROUP4"]                               []
conditions.users.includeGroups{}                         ["USERGROUP1","USERGROUP2","USERGROUP3"]    ["USERGROUP1","USERGROUP2"]
displayName                                              ANY DISPLAY NAME 1                           ANY DISPLAY NAME
modifiedDateTime                                         2025-02-06T10:30:40.0351794+00:00            2025-02-05T10:30:40.0351794+00:00
name                                                     json2                                        Json1
sessionControls.cloudAppSecurity.cloudAppSecurityType    block                                        monitor
sessionControls.signInFrequency.value                    2                                            1

For the life of me I cannot figure out where modifiedDateTime differs. They look identical to me. We can go more semantic with SPL, but as you want the {} notation intact, this is perhaps the most direct.
I've created field extractions in splunkcloud.com, but they don't appear. Here are my extractions (Settings > Fields > Field extractions; App: Search & Reporting, config source: visible in app, Owner: sc_admin):

journal : EXTRACT-destip
  Type: Inline
  Extraction: "dest_ip\":\"(?P<destip>[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\”"
  Owner: sc_admin, App: search, Sharing: Global | Permissions, Status: Enabled
  Object should appear in: all apps; permissions: apps r/w, sc_admin r/w

journal : EXTRACT-srcip
  Type: Inline
  Extraction: "src_ip\":\"(?P<srcip>[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\”"
  Owner: sc_admin, App: search, Sharing: App | Permissions, Status: Enabled
  Object should appear in: this app only (search); permissions: sc_admin r/w

After an Add Data upload of a tar.gz file, in splunkcloud (logged in as sc_admin) > Search, the interesting fields / all fields list doesn't include those fields. What am I missing? Btw, if I extract new fields with the same names, it objects because they already exist.
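As an aside, one quick way to check whether the regex itself matches at search time is to run it inline with rex before relying on the saved extraction; this is a sketch (the sourcetype journal and the field name my_srcip are assumptions, and note the pattern here ends with a straight quote, since a curly quote like the \” above will never match a straight quote in the data):

index=* sourcetype=journal
| rex field=_raw "src_ip\":\"(?P<my_srcip>[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\""
| stats count by my_srcip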