All Posts


Yes, absolutely. The new alerts and reports that I am creating are not sending email notifications. If you have any suggestions, kindly help.
Thank you, Signor Gcusello!
1. Created an app with indexes.conf on the DS.
2. After deploying it to the manager, it was received in manager-app.
3. The peers received it in peer-app.
Also, is it the standard specification for the manager server to receive it via manager-app? Best regards
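For reference, a minimal sketch of what such an app's indexes.conf might contain; the app and index names here are purely hypothetical, and for an indexer cluster the app would typically sit under the manager's manager-apps directory:

# my_indexes_app/default/indexes.conf (hypothetical names)
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

If the manager then distributes the bundle to the peers (for example with splunk apply cluster-bundle), the peers picking it up under peer-apps would be the expected behavior, though your setup may differ.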
Here are a few questions asked during the live Tech Talk.

Q. What else should I be doing to prepare for the Python upgrade?
A. The Splunk Docs page https://docs.splunk.com/Documentation/Splunk/latest/Python3Migration/AboutMigration is the latest and greatest resource for learning more about how to upgrade. We also recently released Splunk Python SDK 2.0.0 with added support for 3.9. That resource is here: https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/

Q. What would you recommend as a cadence of upgrade? We are tight on resources and need to plan in advance.
A. It is worth noting that every customer will have different constraints and challenges, so there isn't a single right answer to this question. However, the general guidance is to think about a few dimensions:
(1) It's important to stay on a supported Splunk version, and as the presentation highlighted, there is a two-year support window after each release comes out. It is therefore prudent to be running on a version that gives a sufficient support window prior to your next upgrade.
(2) Moving to a new version likely involves some amount of qualification in your environment. Consider how long that qualification may take, along with your organization's operations and critical periods; you likely won't want to do upgrades during a peak season (e.g., tax season, or Black Friday / the winter holidays). We therefore often see customers planning to upgrade to a new major/minor version right after their busy season(s). Our goal has been to make this upgrade as easy as possible by providing the ability to upgrade to a new version without having to sequentially upgrade through all intermediate versions. Do make sure to check the release notes / upgrade guide (for example https://docs.splunk.com/Documentation/Splunk/9.2.1/Installation/HowtoupgradeSplunk) for more details, as this may vary for specific releases. The major/minor version will include features as well as bug fixes and security fixes. The length of time your organization needs to qualify a release will vary based on your own system complexity and organizational processes. It is, however, recommended to upgrade to a new major/minor version at least once per year.
(3) Staying current with regard to security vulnerabilities is likely an important aspect for you to consider. Many enterprises have specific policies around security vulnerabilities and the time in which they need to be mitigated or remediated. With this in mind, our maintenance releases, which come out roughly every 8 weeks, provide updates for issues published in our security advisories (see https://advisory.splunk.com/). The maintenance releases do not contain features and are intended to be much easier to upgrade to, given that the surface area of changes is much smaller and thus should require less qualification prior to deployment. It is therefore recommended to stay current on the latest maintenance version for your given release.
In summary, your specific circumstances will likely differ, but it is recommended to upgrade to the latest major.minor version at least once per year, and then to stay current with the maintenance versions for that major/minor release on an ongoing basis.

Q. We are currently on 9.0. Can we upgrade to 9.2 directly, or do we need to upgrade to 9.1 and then 9.2?
A. Yes, that is a supported upgrade path. You can always find information on the supported upgrade paths in Splunk Documentation.
For Splunk Enterprise 9.2, that is located here: https://docs.splunk.com/Documentation/Splunk/9.2.0/Installation/HowtoupgradeSplunk

Q. Are you planning to discontinue the "classic" dashboards?
A. As mentioned in the webinar, the future vision is for Dashboard Studio to become the de facto standard and tool for creating Splunk dashboards. We know many customers are currently on Classic, and we are working to get as much feature parity as possible so that customers can migrate.

Q. Will we get to see any more detail about how the certificate store integration works?
A. You can find information about trust store integration in our Splunk Documentation here: https://docs.splunk.com/Documentation/Splunk/9.2.0/Security/ConfigTLSCertsS2S

Q. Are there any real changes expected to apps when we go up to 3.9 from 3.7?
A. In our internal testing across the platform and Splunk-supported apps, we have not seen significant breakage or compatibility issues. We expect customers to be able to migrate to Python 3.9 relatively easily, but we do recommend that customers test it before deploying into production. The version we are targeting after Python 3.9 is expected to be Python 3.12, which we know will be a major effort. We continue to publish more information on Splunk Docs as it becomes available.

Q. Will Splunk be able to use the Windows Certificate Store, and how do we select which certificate Splunk will use?
A. You can find information about trust store integration in our Splunk Documentation here: https://docs.splunk.com/Documentation/Splunk/9.2.0/Security/ConfigTLSCertsS2S

Q. Are the Splunk containers going to be released on the same cadence as the main product? We have seen many more CVEs in the containers than in Splunk.
A. Yes, the docker-splunk team maintains the same cadence as Splunk. The Splunk Operator for Kubernetes, which utilizes docker-splunk for deployment in Kubernetes environments, also strives to adhere to this cadence. However, it may occasionally require more time to align with the desired schedule.

Q. What versions of Splunk can we upgrade directly from to 9.2.1?
A. You can always find information on the supported upgrade paths in Splunk Documentation. For Splunk Enterprise 9.2, that is located here: https://docs.splunk.com/Documentation/Splunk/9.2.0/Installation/HowtoupgradeSplunk

Q. Aren't the DSs using DFS shares for the apps?
A. For our internal testing, we have tested with NFS and can confirm it works. It should extend to and work with DFS as well.

Q. Since it can be put behind a load balancer, does replication occur between deployment servers?
A. All of the DSs are mounted with the same network drive folder for phone-home, client events, and app files. Since the folders point to the same network drive files for all DSs, they can share each other's information.
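As a general-purpose aid when planning such upgrades (not part of the Tech Talk answers), a quick way to check the version each connected instance is currently running is a REST search along these lines:

| rest /services/server/info splunk_server=*
| table splunk_server version build

Run it from a search head with the relevant instances configured as search peers; the output gives you the version and build to compare against the supported upgrade paths.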
Hi, super late to the thread, but isn't it attributable to the search filter restrictions on the user's role? 10-13-2017 14:04:14.725 INFO SearchProcessor - Final search filter= ( ( splunk_server=splunk-index-test-01* ) )
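If that is the cause, the filter usually comes from the srchFilter setting on the user's role. A minimal sketch in authorize.conf, with a hypothetical role name, would look like:

# authorize.conf (role name is hypothetical)
[role_test_analyst]
srchFilter = splunk_server=splunk-index-test-01*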
Apparently this setting was not enabled on our deployer, hence the ES upgrade still proceeded without it being enabled.
Thank you for your response. I have already tried this. In this search I am getting multiple srcip and multiple dstip values in one row. I require one row for each srcip-to-dstip pair, but the alert should be triggered separately for each pair, with its own title.
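In case a concrete sketch helps (the field names srcip and dstip are assumptions; adjust them to your data): aggregate by both fields so each pair lands on its own row, then let the alert fire once per result row.

index=<indexname> sourcetype=<sourcetypename>
| stats count by srcip, dstip

With the alert's trigger condition set to "For each result", the alert action can reference $result.srcip$ and $result.dstip$ in its subject or title so each pair produces its own separately titled notification.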
@asakha You have to adjust your correlation search as per your fields. This is just a reference.

Alert when end users have logged onto the VPN entry point more than 5 times in a day:
index=<indexname> sourcetype=<sourcetypename> status=success
| bin span=1d _time as day
| stats count by user, day
| where count > 5
| table user, day, count

A fail-to-ban feature for IP addresses whose logins fail more than 3 times in 1 hour:
index=<indexname> sourcetype=<sourcetypename> action=failure
| bin span=1h _time
| stats count as failed_login_count by src_ip, _time
| where failed_login_count > 3
| table src_ip, _time, failed_login_count
| eval ban_message="IP address " . src_ip . " exceeded failed login attempts (" . failed_login_count . ")."

Weekly report of end users' IP addresses attempting VPN logins:
index=vpn_logs sourcetype="your_vpn_sourcetype"
| bin span=1w _time
| stats count as login_count by user, src_ip, _time
| table user, src_ip, _time, login_count
@sumarri Kindly check the below documents for reference: https://docs.splunk.com/Documentation/Splunk/9.2.1/Search/SavingandsharingjobsinSplunkWeb https://docs.splunk.com/Documentation/Splunk/latest/Security/Aboutusersandroles
Hi, I am trying to create a daily alert to email the contents of the Security Posture dashboard to a recipient. Can someone please share how I can turn the content of this dashboard from Splunk ES into a search within an alert, so it can be added to an email and sent out daily? Thanks
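One possible starting point (not an official answer): each Security Posture panel can be opened in Search via its "Open in Search" action, and that search can then be saved as a scheduled report or alert with an email action. The notable-event counts on that dashboard typically come from something along the lines of the sketch below; the exact macro and fields may differ in your ES version.

`notable`
| stats count by urgency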
I do not see why you needed to do that extra extraction, because Splunk should have given you a field named "request_path" already. (See the emulation below.) All you need to do is to assign a new field based on a match.

| eval route = if(match(request_path, "^/orders/\d+"), "/order/{orderID}", null())

The sample data should give you something like:
level=info, request_elapsed=100, request_id=2ca011b5-ad34-4f32-a95c-78e8b5b1a270, request_method=GET, request_path=/orders/123456, response_status=500, route=/order/{orderID}

Is this what you wanted? Here is a data emulation you can play with and compare with real data.

| makeresults
| eval _raw = "level=info request.elapsed=100 request.method=GET request.path=/orders/123456 request_id=2ca011b5-ad34-4f32-a95c-78e8b5b1a270 response.status=500"
| extract
``` data emulation above ```

Of course, if for unknown reasons Splunk doesn't give you request_path, simply add an extract command and skip all the rex commands, which are expensive.
I tried the same thing and found the same issue. I think the blacklist config is only compatible with the cloudtrail input, NOT the sqs_based_s3 input. Really unfortunate, as I wanted to switch to role-based CloudTrail logging rather than using an AWS account. Please put this on your bug list, Splunk.
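For anyone comparing, the blacklist setting that does work with the aws_cloudtrail input looks roughly like the sketch below; the stanza and account names are hypothetical, so check the Splunk Add-on for AWS documentation for the exact options in your version.

# inputs.conf -- blacklist is honored by the aws_cloudtrail input
[aws_cloudtrail://my_cloudtrail_input]
aws_account = my_aws_account
blacklist = ^(?:Describe|List|Get)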
Let me give this a semantic makeover using bit_shift_left (9.2 and above - thanks @jason_hotchkiss for noticing) because semantic code is easier to understand and maintain.

| eval offset = mvappend("24", "16", "8")
| eval segment_rev = mvrange(0, 3)
| foreach *_ip [eval <<FIELD>> = split(<<FIELD>>, "."), <<FIELD>>_dec = sum(mvmap(segment_rev, bit_shift_left(tonumber(mvindex(<<FIELD>>, segment_rev)), tonumber(mvindex(offset, segment_rev)))), tonumber(mvindex(<<FIELD>>, 3))), <<FIELD>> = mvjoin(<<FIELD>>, ".") ``` this last part for display only ```]
| fields - offset segment_rev

The sample data gives:
src_ip=192.168.1.1, src_ip_dec=3232235777, dst_ip=192.168.1.100, dst_ip_dec=3232235876

Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="src_ip, dst_ip
192.168.1.1, 192.168.1.100"
``` data emulation above ```

Note: If it helps readability, you can skip foreach and spell out the two operations separately.

| eval offset = mvappend("24", "16", "8")
| eval segment_rev = mvrange(0, 3)
| eval src_ip = split(src_ip, ".")
| eval dst_ip = split(dst_ip, ".")
| eval src_ip_dec = sum(mvmap(segment_rev, bit_shift_left(tonumber(mvindex(src_ip, segment_rev)), tonumber(mvindex(offset, segment_rev)))), tonumber(mvindex(src_ip, 3)))
| eval dst_ip_dec = sum(mvmap(segment_rev, bit_shift_left(tonumber(mvindex(dst_ip, segment_rev)), tonumber(mvindex(offset, segment_rev)))), tonumber(mvindex(dst_ip, 3)))
| eval src_ip = mvjoin(src_ip, "."), dst_ip = mvjoin(dst_ip, ".") ``` for display only ```
| fields - offset segment_rev
Hi Gustavo, Excellent question, and I can appreciate the interest in the discrepancies. With demo systems, we're generally not dealing with live data. The way it's generated can vary and, at times, can cause abnormalities within the context of the data. This is what is going on here. Beyond this, in production, it's advisable to ensure you're looking at the most specific time range possible to reduce the likelihood of data-aggregation complexities; e.g., looking at 24-hour data is less effective than looking at 5-minute aggregation levels. The following two links may be beneficial too. Server Visibility: https://docs.appdynamics.com/appd/24.x/24.4/en/infrastructure-visibility/server-visibility Troubleshooting Applications: https://docs.appdynamics.com/appd/24.x/24.4/en/application-monitoring/troubleshooting-applications
Any follow-up on this? I am seeing the same issue.
Hi @PoojaChand02 , It seems the screenshots were from different Splunk platforms. The first one is Splunk Enterprise, but the second one is Splunk Cloud. Splunk Cloud does not have a "Data Summary" button. You can see a similar data summary for host data using the query below. (You can use other types like "hosts", "sources", or "sourcetypes"; please do not forget to adjust the rename command accordingly. You can also look at indexes other than main.)

| metadata index=main type=hosts
| eval lastSeen = strftime(lastTime, "%x %l:%M:%S %p")
| rename host AS Host, totalCount AS Count, lastSeen AS "Last Update"
| table Host, Count, "Last Update"
Have you added the dropdown - what is the problem you are facing? Simply add the dropdown, set the 8 static options, and then in your search use index=bla host=*$my_host_token$* where my_host_token is the token for your dropdown. Assuming the table below is the finite list of hosts you will have, then this should work - there are of course other ways to do this, but this is the simplest.
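A minimal Simple XML sketch of such a dropdown, with the same token name as the search above; the host values are placeholders for your eight hosts:

<input type="dropdown" token="my_host_token" searchWhenChanged="true">
  <label>Host</label>
  <choice value="host1">host1</choice>
  <choice value="host2">host2</choice>
  <choice value="host3">host3</choice>
  <default>host1</default>
</input>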
Thanks @scelikok. No, I don't just want the orderID. I want to manually create the RESTful API routing pattern: for "path=/order/123456", "route=/order/{orderID}". Basically, I am trying to use regex to replace the value and create a new field this way: if the value matches \/order\/\d{12}, then convert it to /order/{orderID}. I have other examples like: path=/user/jason@sample.com/orders route=/user/{userID}/orders
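A minimal sketch of that kind of mapping with replace(), assuming the extracted field is named request_path as in the reply above; the patterns are illustrative and should be adjusted to your real paths:

| eval route = replace(request_path, "^/orders?/\d+$", "/order/{orderID}")
| eval route = replace(route, "^/user/[^/]+/orders$", "/user/{userID}/orders")

Because replace() leaves the string untouched when the regex does not match, the evals can be chained, with each pattern rewriting only the paths it recognizes.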
https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchTutorial/Aboutthesearchapp
The data summary option does not exist in Splunk Cloud
Thanks, I will check out your advice in a bit. Yes, I agree that the data structure is not ideal for parsing. Unfortunately, this is output from an OpenTelemetry collector following the OpenTelemetry standard (which Splunk also embraces, though we don't have native parsing for it yet in Splunk Enterprise), so if this takes off as the cross-vendor standard for pushing telemetry, then we are going to have to deal with ingesting this format more and more. Or maybe it is an opportunity to suggest formatting changes to the standard to the CNCF.
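For what it's worth, when the collector emits OTLP/JSON (for example via the file exporter), one possible starting point is spath against the nested arrays; the paths below follow the OTLP field names but should be verified against your actual payload:

| spath path=resourceLogs{}.scopeLogs{}.logRecords{}.body.stringValue output=log_body
| spath path=resourceLogs{}.scopeLogs{}.logRecords{}.severityText output=severity

These will come back as multivalue fields when an event holds several log records, so further mvexpand or restructuring may be needed depending on how the events are broken at ingest.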