All Posts


Hi @slider8p2023, good for you, see you next time! I still don't use Dashboard Studio because it still doesn't have all the features I use in Classic dashboards! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Thanks @gcusello, that seemed to work. I cloned the original dashboard panel by panel and saved it as a non-Dashboard Studio dashboard, then scheduled it to export as a PDF. I was unaware that scheduled PDF export is not available in Dashboard Studio.
Also, I have the following error, which is generated for only one previous alert. If you could please take a look and see what other steps I can take, that would help.

2024-04-18 05:18:47,938 +0000 ERROR sendemail:187 - Sending email. subject="Splunk Alert: ITSEC_Backup_Change_Alert", encoded_subject="Splunk Alert: ITSEC_Backup_Change_Alert", results_link="*****", recipients="['it-security@durr.com']", server="********"

@marnall
Thanks Yuanliu for your quick reply. Yes, I need the % sign included. In the email body I need to color the data in the percentage column, like below: @yuanliu
Hi @masakazu, in the deploymentclient.conf file of the Cluster Master you have to add: repositoryLocation = $SPLUNK_HOME/etc/managed-apps This way the Deployment Server, only on the Cluster Master, doesn't deploy the apps into the apps folder but into the managed-apps folder. Then you have to push them from the GUI (I'm not sure it's possible to automate this through the DS). Pay attention that this deploymentclient.conf must be different than the one on the other clients. Ciao. Giuseppe
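For reference, a minimal sketch of how that deploymentclient.conf on the Cluster Master might look; the targetUri host and port are placeholders for your own Deployment Server:

[deployment-client]
repositoryLocation = $SPLUNK_HOME/etc/managed-apps

[target-broker:deploymentServer]
targetUri = <deployment_server_host>:8089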
Hi @slider8p2023, you could try to clone it by going to https://<your_host>/en-US/app/SplunkEnterpriseSecuritySuite/dashboards and cloning the dashboard, but I'm not sure that it's possible to schedule it. Otherwise, you should create a custom clone of the Security Posture dashboard using the searches that you can extract from the original dashboard, and then schedule it to be sent by email as a PDF. Ciao. Giuseppe
Yes, absolutely. The new alerts or reports that I am creating are unable to send notifications through email... If you have any suggestions, kindly help.
Grazie Signor Gcusello!
1. Create an app with indexes.conf on the DS.
2. After deploying to the manager, it is received in manager-app.
3. The peer receives it in peer-app.
Also, is it standard specification for the manager server to receive data using manager-app? Best regards
Here are a few questions asked during the live Tech Talk.

Q. What else should I be doing to prepare for the Python upgrade?
A. The Splunk Docs page https://docs.splunk.com/Documentation/Splunk/latest/Python3Migration/AboutMigration is the latest and greatest resource for learning more about how to upgrade. We also recently released Splunk Python SDK 2.0.0 with added support for Python 3.9. That resource is here: https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/

Q. What would you recommend as an upgrade cadence? We are tight on resources and need to plan in advance.
A. It is worth noting that every customer will have different constraints and challenges, so there isn't a single right answer to this question. However, the general guidance would be to think about a few dimensions:
(1) It's important to stay on a supported Splunk version, and as the presentation highlighted, there is a two-year support window after each release comes out. It is therefore prudent to be running on a version that gives you a sufficient support window before your next upgrade.
(2) Moving to a new version likely involves some amount of qualification in your environment. Plan around how long that qualification may take, along with your organization's operations and critical periods (e.g., you likely won't want to do upgrades during a peak season such as Tax Season or Black Friday / Winter Holidays). We often see customers planning to upgrade to a new major/minor version right after their busy season(s). Our goal has been to make this upgrade as easy as possible by providing the ability to upgrade to a new version without having to sequentially upgrade through all intermediate versions. Do make sure to check the release notes / upgrade guide (for example https://docs.splunk.com/Documentation/Splunk/9.2.1/Installation/HowtoupgradeSplunk) for more details, as this may vary for specific releases. A major/minor version will include features as well as bug fixes and security fixes. The length of time your organization needs to qualify a release will vary based on your own system complexity and organizational processes. It is, however, recommended to upgrade to a new major/minor version at least once per year.
(3) Staying current with regard to security vulnerabilities is likely an important aspect for you to consider. Many enterprises have specific policies around security vulnerabilities and the time in which they need to be mitigated/remediated. With this in mind, our maintenance releases, which come out roughly every 8 weeks, provide updates for issues published in our security advisories (see https://advisory.splunk.com/). The maintenance releases do not contain features and are intended to be much easier to upgrade to, given that the surface area of changes is much smaller and thus should require less qualification prior to deployment. It is therefore recommended to stay current on the latest maintenance version for your given version.
In summary, your specific circumstances will likely differ, but it is recommended to upgrade to the latest major.minor version at least once per year, and then to stay current with the maintenance versions for that major/minor release on an ongoing basis.

Q. We are currently on 9.0. Can we upgrade to 9.2 directly, or do we need to upgrade to 9.1 and then 9.2?
A. Yes, that is a supported upgrade path. You can always find information on the supported upgrade paths in Splunk Documentation.
For Splunk Enterprise 9.2, that is located here: https://docs.splunk.com/Documentation/Splunk/9.2.0/Installation/HowtoupgradeSplunk

Q. Are you planning to discontinue "classic Dashboards"?
A. As mentioned in the webinar, the future vision is for Dashboard Studio to become the de facto standard and tool for creating Splunk dashboards. We know many customers are currently on Classic, and we are working to get as much feature parity as possible so that customers can migrate.

Q. Will we get to see any more detail about how the certificate store integration works?
A. You can find information about trust store integration in our Splunk Documentation here: https://docs.splunk.com/Documentation/Splunk/9.2.0/Security/ConfigTLSCertsS2S

Q. Are there any real changes expected to apps when we go up to 3.9 from 3.7?
A. In our internal testing across the platform and Splunk-supported apps, we have not seen significant breakage or compatibility issues. We expect customers to be able to migrate to Python 3.9 relatively easily, but we do recommend that customers test it before deploying into production. The version we are targeting after Python 3.9 is expected to be Python 3.12, which we know will be a major effort. We continue to publish more information on Splunk Docs as it becomes available.

Q. Will Splunk be able to use the Windows Certificate Store, and how do we select which certificate Splunk will use?
A. You can find information about trust store integration in our Splunk Documentation here: https://docs.splunk.com/Documentation/Splunk/9.2.0/Security/ConfigTLSCertsS2S

Q. Are the Splunk containers going to be released on the same cadence as the main product? We have seen many more CVEs in the containers than in Splunk.
A. Yes, the docker-splunk team maintains the same cadence as Splunk. The Splunk Operator for Kubernetes, which utilizes docker-splunk for deployment in Kubernetes environments, also strives to adhere to this cadence. However, occasionally it may require more time to align with the desired schedule.

Q. What versions of Splunk can we upgrade directly from to 9.2.1?
A. You can always find information on the supported upgrade paths in Splunk Documentation. For Splunk Enterprise 9.2, that is located here: https://docs.splunk.com/Documentation/Splunk/9.2.0/Installation/HowtoupgradeSplunk

Q. Aren't the DSs using DFS shares for the apps?
A. For our internal testing, we have tested with NFS and can confirm it works. It should extend to and work with DFS as well.

Q. Since it can be put behind a load balancer, does replication occur between Deployment Servers?
A. All of the DSs are mounted with the same network drive folders for phone-home, client events, and app files. Since the folders point to the same network drive files for all DSs, they can share each other's info.
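As a related aside on planning upgrades: one way to confirm which Splunk versions are currently running across the search head and its search peers (assuming your role is allowed to run the rest command against the server/info endpoint) is a search along these lines:

| rest /services/server/info splunk_server=*
| table splunk_server version build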
Hi, super late to the thread, but isn't it attributed to the search filter restrictions on the role of the user?

10-13-2017 14:04:14.725 INFO SearchProcessor - Final search filter= ( ( splunk_server=splunk-index-test-01* ) )
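If you want to confirm where that filter comes from, one way (assuming your role is allowed to run the rest command) is to list the search filters configured on each role, for example:

| rest /services/authorization/roles
| table title srchFilter srchIndexesAllowed srchIndexesDefault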
Apparently this setting was not enabled on our deployer, hence the ES upgrade still proceeded without its enablement.
Thank you for your response. I have already tried this. In this search I am getting multiple srcip and multiple dstip values in one row. I require one row per srcip/dstip pair, but the alert should be triggered separately for each title.
@asakha You have to adjust your correlation search as per your fields. This is just a reference.

Alert when end-users have logged onto the VPN entry point more than 5 times in a day:

index=<indexname> sourcetype=<sourcetypename> status=success
| bin _time span=1d as day
| stats count by user, day
| where count > 5
| table user, day, count

A fail-to-ban feature for IP addresses whose logins fail more than 3 times in 1 hour:

index=<indexname> sourcetype=<sourcetypename> action=failure
| bin _time span=1h
| stats count as failed_login_count by src_ip, _time
| where failed_login_count > 3
| eval ban_message="IP address " . src_ip . " exceeded failed login attempts (" . failed_login_count . ")."
| table src_ip, _time, failed_login_count, ban_message

Weekly report of end-users' IP addresses attempting VPN logins:

index=vpn_logs sourcetype="your_vpn_sourcetype"
| bin _time span=1w
| stats count as login_count by user, src_ip, _time
| table user, src_ip, _time, login_count
@sumarri Kindly check the below documents for reference:
https://docs.splunk.com/Documentation/Splunk/9.2.1/Search/SavingandsharingjobsinSplunkWeb
https://docs.splunk.com/Documentation/Splunk/latest/Security/Aboutusersandroles
Hi, I am trying to create a daily alert to email the contents of the Security Posture dashboard to a recipient. Can someone please share how I can turn the content of this dashboard from Splunk ES into a search within an alert, so it can be added to an email and sent out daily? Thanks
I do not see why you needed to do that extra extraction, because Splunk should have given you a field named "request_path" already. (See the emulation below.) All you need to do is assign a new field based on match.

| eval route = if(match(request_path, "^/orders/\d+"), "/order/{orderID}", null())

The sample data should give you something like:

level | request_elapsed | request_id | request_method | request_path | response_status | route
info | 100 | 2ca011b5-ad34-4f32-a95c-78e8b5b1a270 | GET | /orders/123456 | 500 | /order/{orderID}

Is this what you wanted? Here is a data emulation you can play with and compare with real data.

| makeresults
| eval _raw = "level=info request.elapsed=100 request.method=GET request.path=/orders/123456 request_id=2ca011b5-ad34-4f32-a95c-78e8b5b1a270 response.status=500"
| extract
``` data emulation above ```

Of course, if for unknown reasons Splunk doesn't give you request_path, simply add an extract command and skip all the rex commands, which are expensive.
I tried the same thing and found the same issue. I think the blacklist config is only compatible with the cloudtrail input, NOT the sqs_based_s3 input. Really unfortunate, as I wanted to switch to role-based cloudtrail logging rather than aws account. Please put this on your bug list, Splunk.
Let me give this a semantic makeover using bit_shift_left (9.2 and above - thanks @jason_hotchkiss for noticing) because semantic code is easier to understand and maintain.

| eval offset = mvappend("24", "16", "8")
| eval segment_rev = mvrange(0, 3)
| foreach *_ip [eval <<FIELD>> = split(<<FIELD>>, "."), <<FIELD>>_dec = sum(mvmap(segment_rev, bit_shift_left(tonumber(mvindex(<<FIELD>>, segment_rev)), tonumber(mvindex(offset, segment_rev)))), tonumber(mvindex(<<FIELD>>, 3))), <<FIELD>> = mvjoin(<<FIELD>>, ".") ``` this last part for display only ```]
| fields - offset segment_rev

The sample data gives:

dst_ip | dst_ip_dec | src_ip | src_ip_dec
192.168.1.100 | 3232235876 | 192.168.1.1 | 3232235777

Here is an emulation you can play with and compare with real data.

| makeresults format=csv data="src_ip, dst_ip
192.168.1.1, 192.168.1.100"
``` data emulation above ```

Note: If it helps readability, you can skip foreach and spell the two operations out separately.

| eval offset = mvappend("24", "16", "8")
| eval segment_rev = mvrange(0, 3)
| eval src_ip = split(src_ip, ".")
| eval dst_ip = split(dst_ip, ".")
| eval src_ip_dec = sum(mvmap(segment_rev, bit_shift_left(tonumber(mvindex(src_ip, segment_rev)), tonumber(mvindex(offset, segment_rev)))), tonumber(mvindex(src_ip, 3)))
| eval dst_ip_dec = sum(mvmap(segment_rev, bit_shift_left(tonumber(mvindex(dst_ip, segment_rev)), tonumber(mvindex(offset, segment_rev)))), tonumber(mvindex(dst_ip, 3)))
| eval src_ip = mvjoin(src_ip, "."), dst_ip = mvjoin(dst_ip, ".") ``` for display only ```
| fields - offset segment_rev
Hi Gustavo, Excellent question, and I can appreciate the interest in the discrepancies. With demo systems, we're generally not dealing with live data. The way it's generated can vary and, at times, can cause abnormalities within the context of the data. This is what is going on here. Beyond this, in production, it's advisable to ensure you're looking at the most specific time range possible to reduce the likelihood of data aggregation complexities; e.g., looking at 24-hour data is less effective than looking at 5-minute aggregation levels. The following two links may be beneficial too.
Server Visibility: https://docs.appdynamics.com/appd/24.x/24.4/en/infrastructure-visibility/server-visibility
Troubleshooting Applications: https://docs.appdynamics.com/appd/24.x/24.4/en/application-monitoring/troubleshooting-applications
Any follow-up on this? I am seeing the same issue.