Our organization has been using the AppDynamics Java agent with IBM Websphere for a number of years. This is our first attempt to get the Java agent working with Tomcat. When the Tomcat application server starts up, we get Connection Refused error messages in the Java agent logs.

Stack trace error message:

    org.apache.http.conn.HttpHostConnectException: Connect to server.domain.name:443 [server.domain.name/<IP Address>] failed: Connection refused (Connection refused)
        at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) ~[httpclient-4.5.13.jar:4.5.13]
        at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) ~[httpclient-4.5.13.jar:4.5.13]
        at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) ~[httpclient-4.5.13.jar:4.5.13]
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) ~[httpclient-4.5.13.jar:4.5.13]
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) ~[httpclient-4.5.13.jar:4.5.13]
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) ~[httpclient-4.5.13.jar:4.5.13]

Configuration information:

    -Dappdynamics.controller.hostName=server.domain.name \
    -Dappdynamics.controller.port=443 \
    -Dappdynamics.agent.applicationName=APP-NAME \
    -Dappdynamics.agent.tierName=Services \
    -Dappdynamics.agent.nodeName=APP1_Node1 \
    -Dappdynamics.agent.accountName=customer1 \
    -Dappdynamics.bciengine.should.implement.new.interfaces=false \
    -Dappdynamics.agent.accountAccessKey=<account number> \

What kind of things could cause a connection refused error with the Java agent running in Tomcat? What should I be looking for in the logs? Is there a known difference in connection behaviour between Websphere and Tomcat? Any help would be appreciated.

Thanks!
Dale Chapman
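Not an authoritative diagnosis, but "Connection refused" is a TCP-level rejection: nothing accepted the connection on server.domain.name:443 from this host, which usually points at a wrong host/port, a firewall, or a required proxy rather than anything Tomcat-specific. A hedged first check from the Tomcat host, plus two agent properties worth confirming (both are real agent settings, but verify them against your agent version's documentation):

    # does anything accept a TCP connection on 443 at all from this host?
    curl -v https://server.domain.name:443/

    # with controller port 443 the agent normally also needs SSL enabled:
    -Dappdynamics.controller.ssl.enabled=true
    # and if the WebSphere hosts reach the controller via a proxy, Tomcat may need the same:
    -Dappdynamics.http.proxyHost=... -Dappdynamics.http.proxyPort=...

If curl is also refused, the problem is network reachability from the Tomcat host, not the agent; comparing against the same curl from a working WebSphere host should narrow it quickly.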
Hi. I am trying to set up alerts to notify when the response time is greater than 1000 milliseconds. The alert has to search every minute or every 5 minutes. Below is the query I have used:

    index=testIndex sourcetype=testSourceType basicQuery
    | where executionTime>1000
    | stats count by app_name, executionTime

After running the query with "Last 5 minutes" selected in the dropdown beside the search icon, I get results. I then saved the query as an alert with the time range set to "Last 5 minutes" and the cron expression set to "*/1 * * * *" so it runs every minute over the last 5 minutes. Is this a correct approach? The main point is: I don't want to miss any events with a response time over 1000 ms.

Also, what is the difference between setting the time in the dropdown and specifying earliest=-5m latest=now in the search?

Can someone please help me? Thanks in advance.
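A hedged way to guarantee no gaps and no duplicate alerts is to make the search window exactly match the cron interval, snapped to the minute. For example, keeping the names from the post:

    index=testIndex sourcetype=testSourceType basicQuery earliest=-5m@m latest=@m
    | where executionTime>1000
    | stats count by app_name, executionTime

scheduled with cron */5 * * * * (or earliest=-1m@m latest=@m with */1 * * * *), so each run covers exactly the span where the previous run stopped. Running every minute over a 5-minute window, as described, instead re-alerts on the same events up to five times. As for the dropdown versus inline earliest/latest: they produce the same window at run time, but inline values travel with the saved search text, and the snapped @m form prevents edge events from being missed or counted twice between runs.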
Need some assistance from the experts. I have two queries below which I would like to merge on id.

Query 1:

    index=aws sourcetype=aws:cloudtrail eventName=RebootInstances
    | table _time userName sourceIPAddress requestParameters.instancesSet.items{}.instanceId
    | rename requestParameters.instancesSet.items{}.instanceId as id

Query 2:

    index=aws sourcetype=aws:description source="us-east-2:ec2_instances"
    | table id private_ip_address

I would like the final table fields to be: time, userName, sourceIPAddress, id, private_ip_address. Any assistance given will be appreciated.
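One hedged way to merge these without a join is to pull both sourcetypes in one search and roll them up by id; this sketch is built only from the field names in the post:

    index=aws ((sourcetype=aws:cloudtrail eventName=RebootInstances) OR (sourcetype=aws:description source="us-east-2:ec2_instances"))
    | eval id=coalesce('requestParameters.instancesSet.items{}.instanceId', id)
    | stats values(_time) as time, values(userName) as userName, values(sourceIPAddress) as sourceIPAddress, values(private_ip_address) as private_ip_address by id
    | table time userName sourceIPAddress id private_ip_address

The single quotes in the eval are needed because the CloudTrail field name contains {} characters.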
Hello, we are going to have several OpenShift 4 clusters running CoreOS, and therefore no possibility of installing a standard version of the Splunk Universal Forwarder. Do you think it is possible to install a containerized version of the Splunk Universal Forwarder on each OpenShift node and have it connect to a (non-containerized) Splunk deployment server, so the forwarders can be controlled centrally? Thanks a lot, Edoardo
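In principle yes: a containerized UF phones home to a deployment server the same way a regular one does, through a deploymentclient.conf baked into (or mounted onto) the container. A minimal sketch, with a hypothetical server name:

    # $SPLUNK_HOME/etc/system/local/deploymentclient.conf
    [deployment-client]

    [target-broker:deploymentServer]
    targetUri = deployment-server.example.com:8089

As far as I know the official splunk/universalforwarder image can also take this via the SPLUNK_DEPLOYMENT_SERVER environment variable; verify that against the image documentation for your version.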
I'm trying to display a total count for each value found in the attributes.eventtype field, grouped by the attributes.campaignname field. I'm displaying these stats for two specified values of attributes.campaignname:

    index=mail sourcetype="phish-campaign-logs" attributes.campaignname="Undelivered Phishing Campaign - FY21Q2 - 062421" OR attributes.campaignname="O365 Re-authentication - FY21Q3"
    | spath output=eventtype attributes.eventtype
    | dedup id
    | stats count(eval(eventtype="Data Submission")) AS Data_Submission, count(eval(eventtype="Email Click")) AS Email_Click, count(eval(eventtype="Email View")) AS Email_View, count(eval(eventtype="No Action")) AS No_Action, count(eval(eventtype="TM Complete")) AS TM_Complete, count(eval(eventtype="TM Sent")) AS TM_Sent BY attributes.campaignname
    | addtotals

When running this search, I receive smaller counts for each eventtype value for one of the specified campaigns, "Undelivered Phishing Campaign - FY21Q2 - 062421". If I specify only that campaign in my search, I get back the expected total count for each of the values:

    index=mail sourcetype="phish-campaign-logs" attributes.campaignname="Undelivered Phishing Campaign - FY21Q2 - 062421"
    | spath output=eventtype attributes.eventtype
    | dedup id
    | stats count(eval(eventtype="Data Submission")) AS Data_Submission, count(eval(eventtype="Email Click")) AS Email_Click, count(eval(eventtype="Email View")) AS Email_View, count(eval(eventtype="No Action")) AS No_Action, count(eval(eventtype="TM Complete")) AS TM_Complete, count(eval(eventtype="TM Sent")) AS TM_Sent BY attributes.campaignname
    | addtotals

Please help me make this search work properly. Thank you in advance.
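One likely culprit is the bare dedup id: if the same id value occurs in both campaigns, dedup keeps only its first occurrence across the whole result set, which would shrink exactly one campaign's counts. A hedged variant that dedupes within each campaign instead:

    index=mail sourcetype="phish-campaign-logs" attributes.campaignname="Undelivered Phishing Campaign - FY21Q2 - 062421" OR attributes.campaignname="O365 Re-authentication - FY21Q3"
    | spath output=eventtype attributes.eventtype
    | dedup id attributes.campaignname
    | stats count(eval(eventtype="Data Submission")) AS Data_Submission, count(eval(eventtype="Email Click")) AS Email_Click, count(eval(eventtype="Email View")) AS Email_View, count(eval(eventtype="No Action")) AS No_Action, count(eval(eventtype="TM Complete")) AS TM_Complete, count(eval(eventtype="TM Sent")) AS TM_Sent BY attributes.campaignname
    | addtotals

The only change is dedup id attributes.campaignname, which treats the same id appearing in different campaigns as distinct events.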
I'm writing a query to return the text part of the log, but when I use it on my dashboard it gives this error message:

    Value node <query> is not supposed to have children

My query:

    index=... user Passed-Authentication earliest=@d
    | rex field=_raw "mdm-tlv=ac-user-agent=(?<message>.*?),"
    | table message

My dashboard:

    <panel>
      <single>
        <title>Meu titulo</title>
        <search>
          <query>index=... user Passed-Authentication earliest=@d | rex field=_raw "mdm-tlv=ac-user-agent=(?<message/>.*?)," | table message </query>
        </search>
        <option name="height">96</option>
      </single>
    </panel>

I believe the error is due to <message>, but I'm new to Splunk.
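The parser is treating the <message> capture group as a child XML element of <query>, which is exactly what the error says. The usual fix is to wrap the query in a CDATA section (or escape < as &lt;); a sketch using the query from the post:

    <panel>
      <single>
        <title>Meu titulo</title>
        <search>
          <query><![CDATA[index=... user Passed-Authentication earliest=@d
    | rex field=_raw "mdm-tlv=ac-user-agent=(?<message>.*?),"
    | table message]]></query>
        </search>
        <option name="height">96</option>
      </single>
    </panel>

Inside CDATA the regex keeps its original (?<message>...) form unmodified.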
How do I integrate/connect a macro to a data model or a CIM data model?
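If the question is how to wrap a data-model search in a macro, the usual pattern is a macros.conf entry whose definition is a tstats search over the model. A hedged sketch against the CIM Authentication data model (the macro name is made up):

    # macros.conf
    [auth_dm_base]
    definition = | tstats summariesonly=true count from datamodel=Authentication by Authentication.src, Authentication.user

which you would then call as | `auth_dm_base` at the start of a search.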
Hello all, does anyone know the definition of the rest.simpleRequest function? I'm trying to find out how it works when used like this:

    rest.simpleRequest(url, postargs=postargs, sessionKey=key, raiseAllErrors=True)

Thank y'all.
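rest.simpleRequest comes from the splunk.rest module in Splunk's bundled Python bindings. To the best of my knowledge it sends an HTTP request to splunkd at the given path or URL (POST when postargs is supplied, GET otherwise), authenticates with sessionKey, and returns a (response, content) tuple; raiseAllErrors=True turns non-2xx responses into raised exceptions instead of returned errors. A hedged usage sketch, with a made-up endpoint path:

    import splunk.rest as rest

    # POSTs because postargs is given; sessionKey authenticates against splunkd
    response, content = rest.simpleRequest(
        '/services/my/custom/endpoint',   # hypothetical path
        postargs={'output_mode': 'json'},
        sessionKey=key,
        raiseAllErrors=True,              # raise on HTTP errors instead of returning them
    )
    print(response.status, content)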
I am collecting firewall logs using the OPSEC LEA app. This add-on is set up on a heavy forwarder (HF). The app is set up correctly and logs are arriving on the HF, but I am unable to view them on the search head. The HF has connectivity to the indexers and is sending internal logs and other application logs to the indexers. How do I check what is wrong with the Checkpoint logs?
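A hedged way to narrow this down from the HF: check whether the Checkpoint events are actually leaving the HF's pipelines, then compare index names. For example (the series/sourcetype name is an assumption; substitute whatever the OPSEC LEA add-on actually assigns):

    index=_internal source=*metrics.log* group=per_sourcetype_thruput series=opsec*

If throughput shows up there but nothing is searchable, the usual suspects are the input writing to an index that does not exist on the indexers, or outputs.conf routing on the HF sending that input somewhere else.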
In the latest Splunk Security Essentials 3.4.0, and in previous releases, the Data Inventory detection in "CIM + Event Size Introspection" starts a query that will never complete due to an unmatched parenthesis. The query is autogenerated, so I'm not sure if this is due to a misconfiguration on my part, or perhaps just an unwanted feature.

    (index=main source=WinEventLog:Security) ) OR (index=main source=WinEventLog:Security )
    | head 10000
    | eval SSELENGTH = len(_raw)
    | eventstats range(_time) as SSETIMERANGE
    | fields SSELENGTH SSETIMERANGE tag
    | fieldsummary
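For reference, simply removing the stray closing parenthesis yields a balanced search that runs (a sketch of the apparent intent, not the app's verified output):

    (index=main source=WinEventLog:Security) OR (index=main source=WinEventLog:Security)
    | head 10000
    | eval SSELENGTH = len(_raw)
    | eventstats range(_time) as SSETIMERANGE
    | fields SSELENGTH SSETIMERANGE tag
    | fieldsummary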
Hi there! With the new versions of Splunk's Machine Learning Toolkit (MLTK) v5.3 and the Python for Scientific Computing add-on (PSC) v3.0, we have some important and exciting news to share for all current and future users of MLTK. It is important to note first that, due to several breaking changes in the underlying Python libraries, MLTK v5.3 and PSC 3.0 are not backwards compatible with previous versions of MLTK or PSC. Questions? See below!

What does this mean for me?
If you are actively using MLTK and PSC, you might find that models trained using your current MLTK version do not work with the new MLTK release.

What do I need to do?
On upgrading to MLTK 5.3 and PSC 3.0 you will need to retrain your models. To do this you need to re-run the Splunk search that originally generated the model, i.e. the Splunk search that contains the fit command into your model.

What if I have models that are trained using partial fit?
If you have models that utilize partial_fit, it is recommended that you delete your existing model and retrain a new model using as much historic data as deemed necessary (the amount of data needed depends heavily on the seasonal nature of the data source used to generate the model). It is recommended to periodically replace partial_fit models anyway, to ensure you are not making decisions from models that are biased toward historic data.

Will I need to change any of my searches?
No, all existing searches that utilize MLTK search commands will still operate as expected after the upgrade.
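For illustration, retraining just means re-running whatever search originally produced the model, so the model file is regenerated under the new libraries. A minimal hypothetical example (the lookup, fields, and model name are all made up):

    | inputlookup my_training_data.csv
    | fit LinearRegression response_time from cpu_load memory_used into app:my_response_model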
Please confirm something or correct me. If I understand correctly, it's the event's _time that's the basis for bucket ageing (hot -> warm -> cold (-> frozen)), right? I understand that it's typically designed this way for collecting events which have monotonically growing timestamps. But what would happen if my source (regardless of the reason) generated events with "random" timestamps? One could be from the distant past (several years back, maybe), another from the future, and so on. Would that mean I could end up rolling buckets after just one or two events, because I'd have sufficiently old events, or a sufficiently big timespan in the case of hot buckets?
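Broadly yes: among other triggers, a hot bucket rolls when its event _time span gets too wide, which is why badly skewed timestamps can churn buckets. Splunk partly defends against this by diverting extreme timestamps into separate quarantine buckets. The relevant per-index indexes.conf settings, with what I believe are the defaults (verify against the spec file for your version):

    # indexes.conf
    maxHotSpanSecs = 7776000         # max _time span of one hot bucket (~90 days)
    quarantinePastSecs = 77760000    # events older than now minus this go to a quarantine bucket
    quarantineFutureSecs = 2592000   # events further in the future than this do too

So a few wildly old or future events land in quarantine buckets rather than forcing your main hot buckets to roll, but a source that routinely scatters timestamps inside those bounds can still produce many small buckets.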
Why doesn't the threathunting index receive mapped data from Sysmon (the windows index)? By the way, I edited the macros to suit my environment, but it still didn't work.
Hi, I have two fields in my logfile: <servername> and <CLOSESESSION>. I need to know, for each day and each servername, when the count of CLOSESESSION events is 0. Every day I expect CLOSESESSION to appear in each server's logs; if one or more servers have no CLOSESESSION, it means something is going wrong. Here is the SPL:

    index="my_index"
    | rex field=source "(?<servername>\w+)."
    | rex "CLOSESESSION\:\s+(?<CLOSESESSION>\w+)"
    | table _time servername CLOSESESSION

Expected output:

    Servername    cause
    Server10      NOCLOSESESSION
    Server15      NOCLOSESESSION

Any ideas? Thanks.
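One hedged approach: servers that logged no CLOSESESSION lines at all will simply be absent from the results, so you need a reference list of expected servers to surface the silent ones. A sketch assuming a hypothetical lookup expected_servers.csv with a servername column:

    index="my_index"
    | rex field=source "(?<servername>\w+)\."
    | rex "CLOSESESSION:\s+(?<CLOSESESSION>\w+)"
    | stats count(CLOSESESSION) as closesessions by servername
    | append [| inputlookup expected_servers.csv | eval closesessions=0]
    | stats max(closesessions) as closesessions by servername
    | where closesessions=0
    | eval cause="NOCLOSESESSION"
    | table servername cause

Run it per day (or add bin _time span=1d and include _time in the by clauses) to get the daily breakdown.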
Hi fellows, I am trying to get some statistics about AD users whose accounts will expire within 7 days. I need help because my search doesn't work as expected: I get the list of all user accounts, and the list shows accounts expiring as much as a month out, as if the relative_time function weren't working correctly. Do I need ldapsearch instead of EventCode=4738 to get all users?

    index=* EventCode=4738 Account_Expires!="-"
    | eval is_interesting=strptime(Account_Expires,"%m/%d/%Y")
    | where is_interesting < relative_time(now(),"+7d@d")
    | table user status Account_Expires is_interesting
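Two hedged adjustments that may explain what you are seeing: the where clause has no lower bound, so any account whose expiry parses to a past date also matches, and the strptime format must exactly match how Account_Expires is rendered in your events. A sketch with the window bounded on both sides:

    index=* EventCode=4738 Account_Expires!="-"
    | eval expiry_epoch=strptime(Account_Expires,"%m/%d/%Y")
    | where expiry_epoch >= now() AND expiry_epoch < relative_time(now(),"+7d@d")
    | table user status Account_Expires expiry_epoch

And yes, EventCode=4738 only sees accounts that had a recent attribute change; to enumerate all accounts you would query AD directly, e.g. with the ldapsearch command from the Splunk Supporting Add-on for Active Directory.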
<?xml version="1.0" standalone="yes" ?>
<SymCLI_ML>
  <Symmetrix>
    <Symm_Info>
      <symid>000197000225</symid>
    </Symm_Info>
    <Disk_Group>
      <Disk_Group_Info>
        <disk_group_number>1</disk_group_number>
        <disk_group_name>GRP_1_3840_EFD_7R5</disk_group_name>
        <disk_location>Internal</disk_location>
        <disks_selected>17</disks_selected>
        <technology>EFD</technology>
        <speed>0</speed>
        <form_factor>N/A</form_factor>
        <hyper_size_megabytes>56940</hyper_size_megabytes>
        <hyper_size_gigabytes>55.6</hyper_size_gigabytes>
        <hyper_size_terabytes>0.05</hyper_size_terabytes>
        <max_hypers_per_disk>64</max_hypers_per_disk>
        <disk_size_megabytes>3644152</disk_size_megabytes>
        <disk_size_gigabytes>3558.7</disk_size_gigabytes>
        <disk_size_terabytes>3.48</disk_size_terabytes>
        <rated_disk_size_gigabytes>3840</rated_disk_size_gigabytes>
        <rated_disk_size_terabytes>3.75</rated_disk_size_terabytes>
      </Disk_Group_Info>
      <Disk_Group_Totals>
        <units>gigabytes</units>
        <total>60498.6</total>
        <free>0.0</free>
        <actual>60498.8</actual>
      </Disk_Group_Totals>
    </Disk_Group>
    <Disk_Group_Summary_Totals>
      <units>gigabytes</units>
      <total>60498.6</total>
      <free>0.0</free>
      <actual>60498.8</actual>
    </Disk_Group_Summary_Totals>
  </Symmetrix>
</SymCLI_ML>

I have been trying to get this data parsed into usable fields but have not been able to. I need your kind help.
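If the goal is to extract these XML elements as search-time fields, one hedged starting point is KV_MODE=xml on the sourcetype (the sourcetype and index names here are hypothetical), or spath in the search itself:

    # props.conf
    [symcli:xml]
    KV_MODE = xml

    # or, ad hoc in a search:
    index=storage sourcetype="symcli:xml"
    | spath path=SymCLI_ML.Symmetrix.Disk_Group.Disk_Group_Info.disk_group_name output=disk_group_name
    | spath path=SymCLI_ML.Symmetrix.Disk_Group.Disk_Group_Totals.total output=total_gigabytes
    | table disk_group_name total_gigabytes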
Hi all, the tabled results of a scheduled search are sent via email as an attached CSV. Some rows can be very long, so in some cases, when I open the CSV file in Excel, I find "split rows": I would expect one line per row, but instead I sometimes get half a line positioned in the second column (screenshot not reproduced here). I'd like to obtain one entire line per row, so that every event sits only in the first column in Excel. The source search finds some events and tables some fields as its result. Thanks in advance for any hint.
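Split rows in Excel usually mean some field value contains embedded newlines, which the CSV faithfully preserves. One hedged fix is to strip them before tabling (the field name is a placeholder for whichever of your fields carries the line breaks):

    ... | rex mode=sed field=my_long_field "s/[\r\n]+/ /g" | table my_long_field, other_fields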
I have the following results returned by a search query:

    _time                       Id1         Id2
    2021-10-13 08:20:22.219     ABC471_1    8456
    2021-10-13 08:20:21.711     ABC471_8    8463
    2021-10-13 08:20:16.112     ABC471_3    8458

However, I only receive an alert notification for the first result. My alert configuration is set up as follows:

Settings:
    Alert type:        Scheduled
    Time Range:        Today
    Cron Expression:   */5 * * * *
    Expires:           24 hours

Trigger Conditions:
    Number of Results: > 0
    Trigger:           For each result
    Throttle:          Ticked
    Suppress results containing field value: Id2=$result.Id2$
    Suppress triggering for: 24 hours

Trigger Actions:
    Add to Triggered Alerts
    Send email

I am expecting 3 emails to be generated, one for each of my search query results, given that I am suppressing on Id2, which is different in each case. However, I am receiving just the one alert as stated above. Can anyone advise me what I am doing wrong in this case? Thanks.
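One common cause, offered as a guess: the "Suppress results containing field value" box expects only the field name, i.e. Id2, not the Id2=$result.Id2$ token syntax; with the latter, throttling may not match per result at all. In savedsearches.conf terms the intended per-result setup would look roughly like this (verify against the spec for your version):

    alert.digest_mode = 0
    alert.suppress = 1
    alert.suppress.fields = Id2
    alert.suppress.period = 24h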
Hi, I deployed a Splunk distributed topology. Now my search head has an issue: the KV Store is in a failed state (which makes the Enterprise Security app fail too). I checked /opt/splunk/var/log/splunk/splunkd.log and found the logs below:

    10-13-2021 18:14:03.127 +0700 ERROR DataModelObject - Failed to parse baseSearch. err=Error in 'inputlookup' command: External command based lookup 'correlationsearches_lookup' is not available because KV Store initialization has failed. Contact your system administrator., object=Correlation_Search_Lookups, baseSearch=| inputlookup append=T correlationsearches_lookup | eval source=_key | eval lookup="correlationsearches_lookup" | append [| `notable_owners`] | fillnull value="notable_owners_lookup" lookup | append [| `reviewstatuses`] | fillnull value="reviewstatuses_lookup" lookup | append [| `security_domains`] | fillnull value="security_domain_lookup" lookup | append [| `urgency`] | fillnull value="urgency_lookup" lookup
    10-13-2021 18:14:30.350 +0700 ERROR KVStorageProvider - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling ismaster on '127.0.0.1:8191']
    10-13-2021 18:14:30.350 +0700 ERROR KVStoreAdminHandler - An error occurred.

Could anyone help me troubleshoot and solve this issue? Thanks so much!
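A couple of hedged first steps for a failed KV Store on a search head:

    # current member state as Splunk sees it
    $SPLUNK_HOME/bin/splunk show kvstore-status

    # mongod's own log usually carries the root cause (certificates, dbPath, version mismatch, ...)
    tail -100 $SPLUNK_HOME/var/log/splunk/mongod.log

If the store proves unrecoverable on a standalone search head, splunk clean kvstore --local followed by a restart rebuilds it, but treat that as a last resort and back up first, since it wipes local collections.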
Dear Splunk Community, I have a statistics table and a corresponding column chart that show the amount of errors per server (screenshots omitted here). Right now I am changing the colors on the column chart based on static values, like so:

    index="myIndex" host="myHostOne*" OR host="myHostTwo*" source="mySource" ERROR NOT WARN CTJT*
    | table host, errors
    | eval errors = host
    | stats count by host
    | eval redCount = if(count>50,count,0)
    | eval yellowCount = if(count<=50 AND count>25,count,0)
    | eval greenCount = if(count<=25,count,0)
    | fields - count
    | dedup host
    | rex field=host mode=sed "s/\..*$//"
    | sort host asc
    | rename host AS "Servers"
    | rename count AS "Foutmeldingen"

The above uses a time range of "last 24 hours". I would like to change the colors of the bars (green, yellow, red) when a certain percentage of errors has been reached, based on the average of last week. To summarize, I would like to:

1. Somehow get the average amount of errors per server per day over the last 7 days.
2. Then specify a percentage for each color (e.g. if the amount of errors today is 25% more than last week's average, make the bar red).

I have no idea how to do this; can anyone help? Thanks in advance.
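A hedged sketch of one way to do this: compute each host's average daily error count over the previous seven days, append today's count, and derive the color fields from the percentage. It reuses the names from the post; the 1.25 factor encodes the "25% above last week's average" example and is the part you would tune:

    index="myIndex" (host="myHostOne*" OR host="myHostTwo*") source="mySource" ERROR NOT WARN CTJT* earliest=-7d@d latest=@d
    | bin _time span=1d
    | stats count by host, _time
    | stats avg(count) as avgDaily by host
    | append
        [ search index="myIndex" (host="myHostOne*" OR host="myHostTwo*") source="mySource" ERROR NOT WARN CTJT* earliest=@d
        | stats count as todayCount by host ]
    | stats values(avgDaily) as avgDaily, values(todayCount) as todayCount by host
    | eval redCount    = if(todayCount > avgDaily*1.25, todayCount, 0)
    | eval yellowCount = if(todayCount <= avgDaily*1.25 AND todayCount > avgDaily, todayCount, 0)
    | eval greenCount  = if(todayCount <= avgDaily, todayCount, 0)
    | table host, redCount, yellowCount, greenCount

The hostname trimming and renames from your original pipeline can be appended unchanged after the final table.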