All Posts


Assuming this is supposed to be good JSON (which it isn't) and that you had missed a field name on the last object in the collection, you could try this:

| spath
``` Fix up message to make a valid JSON field ```
| eval message="{\"message\":".message."}"
``` Get the collection from message ```
| spath input=message message{} output=collection
``` Expand the collection into separate events ```
| mvexpand collection
``` Extract the fields ```
| spath input=collection
``` Assume you want the totals by ARUNAME ```
| stats sum(TOTAL) as Total, sum(PROCESSED) as Processed sum(REMAINING) as Remaining sum(ERROR) as Error sum(SKIPPED) as Skipped by ARUNAME

For the first view, you would remove the by clause from the stats command.
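For anyone who wants to see the fix outside of Splunk, here is a small Python sketch of the same idea on toy data (the field names and values below are made up for illustration): wrap the bare array in an enclosing field name so it becomes valid JSON, parse it, then aggregate by ARUNAME the way the stats command does.

```python
import json

# Hypothetical raw value of the "message" field: a bare JSON array
# with no enclosing field name, as described in the post.
raw = ('[{"ARUNAME": "A", "TOTAL": 10, "PROCESSED": 7, "ERROR": 1},'
       ' {"ARUNAME": "A", "TOTAL": 5, "PROCESSED": 5, "ERROR": 0}]')

# Mirror the SPL fix: wrap the bare value so it parses as valid JSON.
fixed = '{"message": ' + raw + '}'
collection = json.loads(fixed)["message"]

# Equivalent of: stats sum(TOTAL) ... by ARUNAME
totals = {}
for item in collection:
    t = totals.setdefault(item["ARUNAME"],
                          {"Total": 0, "Processed": 0, "Error": 0})
    t["Total"] += item["TOTAL"]
    t["Processed"] += item["PROCESSED"]
    t["Error"] += item["ERROR"]

print(totals)  # {'A': {'Total': 15, 'Processed': 12, 'Error': 1}}
```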
My suggestion is "don't use map". map is an expensive, resource-intensive, and slow command. Another way to achieve this might be:

index=hello sourcetype=welcome
| eventstats max(DATETIME) as LatestTime
| where DATETIME=LatestTime
| stats sum(HOUSE_TRADE_COUNT) as HOUSE_Trade_Count
Hi @slider8p2023, good for you, see you next time! I still don't use Dashboard Studio because it still doesn't have all the features I use in classic dashboards! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Thanks @gcusello, that seemed to work. I cloned the original dashboard panel by panel and saved it as a non-Dashboard Studio dashboard, then scheduled it to export as PDF. I was unaware that scheduled PDF export is not available when using Dashboard Studio.
Also, I have the following error, which is generated for only one previous alert. If you could please take a look and see what other steps I can take, that would help:

2024-04-18 05:18:47,938 +0000 ERROR sendemail:187 - Sending email. subject="Splunk Alert: ITSEC_Backup_Change_Alert", encoded_subject="Splunk Alert: ITSEC_Backup_Change_Alert", results_link="*****", recipients="['it-security@durr.com']", server="********"

@marnall
Thanks Yuanliu for your quick reply. Yes, I need the % sign included. In the email body, I need to color the data in the percentage column like below: @yuanliu
Hi @masakazu, in the deploymentclient.conf file of the Cluster Master, you have to add:

repositoryLocation = $SPLUNK_HOME/etc/managed-apps

This way the Deployment Server, only on the Cluster Master, doesn't deploy the apps into the apps folder but into the managed-apps folder. Then you have to push it via the GUI (I'm not sure it's possible to automate this from the DS). Pay attention that this deploymentclient.conf must be different than the one on the other clients. Ciao. Giuseppe
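For context, a minimal deploymentclient.conf illustrating where that setting lives (the serverClass/target values here are placeholders, not from the original post — adjust them to your environment):

```
[deployment-client]
# Have the deployment client on the Cluster Master drop received apps
# into managed-apps instead of the default apps directory.
repositoryLocation = $SPLUNK_HOME/etc/managed-apps

[target-broker:deploymentServer]
# Placeholder address of your Deployment Server.
targetUri = ds.example.com:8089
```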
Hi @slider8p2023, you could try going to https://<your_host>/en-US/app/SplunkEnterpriseSecuritySuite/dashboards and cloning the dashboard, but I'm not sure it's possible to schedule it. Otherwise, you could create a custom clone of the Security Posture dashboard using the searches that you can extract from the original dashboard, and then schedule it to be sent by email as a PDF. Ciao. Giuseppe
Yes, absolutely. The new alerts and reports that I am creating are unable to send notifications through email. If you have any suggestions, kindly help.
Thank you, Signor Gcusello!
1. Create an app with indexes.conf on the DS
2. After deploying to the manager, it is received in manager-app
3. The peers receive it in peer-app.
Also, is it the standard specification for the manager server to receive data using manager-app? Best regards
Hi, super late to the thread, but isn't it attributed to the search filter restrictions on the role of the user?

10-13-2017 14:04:14.725 INFO SearchProcessor - Final search filter= ( ( splunk_server=splunk-index-test-01* ) )
Apparently this setting was not enabled on our deployer, hence the ES upgrade still proceeded without it being enabled.
Thank you for your response. I have already tried this. In this search I am getting multiple srcip and multiple dstip values in one row. I need one row for each srcip-to-dstip pair, but the alert should trigger separately for each title.
@sumarri Kindly check the below documents for reference:
https://docs.splunk.com/Documentation/Splunk/9.2.1/Search/SavingandsharingjobsinSplunkWeb
https://docs.splunk.com/Documentation/Splunk/latest/Security/Aboutusersandroles
Hi, I am trying to create a daily alert to email the contents of the Security Posture dashboard to a recipient. Can someone please share how I can turn the content of this dashboard from Splunk ES into a search within an alert, so it can be added to an email and sent out daily? Thanks
I do not see why you needed to do that extra extraction, because Splunk should have given you a field named "request_path" already. (See emulation below.) All you need to do is to assign a new field based on match:

| eval route = if(match(request_path, "^/orders/\d+"), "/order/{orderID}", null())

The sample data should give you something like:

level | request_elapsed | request_id                           | request_method | request_path   | response_status | route
info  | 100             | 2ca011b5-ad34-4f32-a95c-78e8b5b1a270 | GET            | /orders/123456 | 500             | /order/{orderID}

Is this what you wanted? Here is a data emulation you can play with and compare with real data:

| makeresults
| eval _raw = "level=info request.elapsed=100 request.method=GET request.path=/orders/123456 request_id=2ca011b5-ad34-4f32-a95c-78e8b5b1a270 response.status=500"
| extract
``` data emulation above ```

Of course, if for unknown reasons Splunk doesn't give you request_path, simply add an extract command and skip all the rex, which is expensive.
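The route-normalization step is just a regex test; a quick Python sketch of the same logic (the helper name route_for is made up here, not part of Splunk):

```python
import re

def route_for(request_path):
    # Collapse any /orders/<numeric id> path into one route label,
    # mirroring the SPL: if(match(request_path, "^/orders/\d+"), ...)
    if re.match(r"^/orders/\d+", request_path):
        return "/order/{orderID}"
    return None

print(route_for("/orders/123456"))  # /order/{orderID}
print(route_for("/health"))         # None
```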
I tried the same thing and found the same issue. I think the blacklist config is only compatible with the cloudtrail input, NOT the sqs_based_s3 input. Really unfortunate, as I wanted to switch to role-based cloudtrail logging rather than aws account. Please put this on your bug list, Splunk.
Let me give this a semantic makeover using bit_shift_left (9.2 and above - thanks @jason_hotchkiss for noticing), because semantic code is easier to understand and maintain.

| eval offset = mvappend("24", "16", "8")
| eval segment_rev = mvrange(0, 3)
| foreach *_ip
    [eval <<FIELD>> = split(<<FIELD>>, "."),
    <<FIELD>>_dec = sum(mvmap(segment_rev, bit_shift_left(tonumber(mvindex(<<FIELD>>, segment_rev)), tonumber(mvindex(offset, segment_rev)))), tonumber(mvindex(<<FIELD>>, 3))),
    <<FIELD>> = mvjoin(<<FIELD>>, ".") ``` this last part for display only ```]
| fields - offset segment_rev

The sample data gives:

dst_ip        | dst_ip_dec | src_ip      | src_ip_dec
192.168.1.100 | 3232235876 | 192.168.1.1 | 3232235777

Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="src_ip, dst_ip
192.168.1.1, 192.168.1.100"
``` data emulation above ```

Note: If it helps readability, you can skip foreach and spell the two operations out separately.

| eval offset = mvappend("24", "16", "8")
| eval segment_rev = mvrange(0, 3)
| eval src_ip = split(src_ip, ".")
| eval dst_ip = split(dst_ip, ".")
| eval src_ip_dec = sum(mvmap(segment_rev, bit_shift_left(tonumber(mvindex(src_ip, segment_rev)), tonumber(mvindex(offset, segment_rev)))), tonumber(mvindex(src_ip, 3)))
| eval dst_ip_dec = sum(mvmap(segment_rev, bit_shift_left(tonumber(mvindex(dst_ip, segment_rev)), tonumber(mvindex(offset, segment_rev)))), tonumber(mvindex(dst_ip, 3)))
| eval src_ip = mvjoin(src_ip, "."), dst_ip = mvjoin(dst_ip, ".") ``` for display only ```
| fields - offset segment_rev
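The underlying arithmetic (shift each octet left by 24, 16, and 8 bits, then add the last octet) can be verified in a few lines of Python; the helper name ip_to_dec is just for this sketch:

```python
def ip_to_dec(ip):
    # Same arithmetic as the SPL above: shift each leading octet left
    # by 24/16/8 bits, then add the final octet unshifted.
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) + (b << 16) + (c << 8) + d

print(ip_to_dec("192.168.1.1"))    # 3232235777
print(ip_to_dec("192.168.1.100"))  # 3232235876
```

This reproduces the same decimal values shown in the sample output table.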
Hi Gustavo, Excellent question, and I can appreciate the interest in the discrepancies. With demo systems, we're generally not dealing with live data. The way it's generated can vary and, at times, can cause abnormalities within the context of the data. This is what is going on here. Beyond this, in production, it's advisable to ensure you're looking at the most specific time range possible to reduce the likelihood of data aggregation complexities, e.g. looking at 24-hour data is less effective than looking at 5-minute aggregation levels. The following two links may be beneficial too.
Server Visibility: https://docs.appdynamics.com/appd/24.x/24.4/en/infrastructure-visibility/server-visibility
Troubleshooting Applications: https://docs.appdynamics.com/appd/24.x/24.4/en/application-monitoring/troubleshooting-applications
Any follow-up on this? I am seeing the same issue.