All Topics

Hello, in a particular sourcetype we are getting a huge number of events, but only some of them are relevant. I am trying to keep only events containing certain matching strings and exclude everything else. Matching strings: Session initialization | Session initialized (there are a few more as well). I tried this by referring to this post: Link. When I use this it excludes everything, and when I tried with only setparsing it ingests all data. Not sure what I am missing here.

props.conf:

[mx_java_event]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = true
EXTRACT-JavaClass = ,\d+\s\[(?<JavaClass>[^:]*):
EXTRACT-Session = session:(?<Session>\d+)
TRANSFORMS-set = setnull, setparsing

transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = Session initialization | Session initialized
DEST_KEY = queue
FORMAT = indexQueue

FYI: in case the REGEX is incorrect, I also tried "REGEX = Session"; that is not working either.

Sample data (only the 1st and 3rd lines should match):

2020-08-12 14:08:11,775 [Thread-233 - Worker-54] murex.processing.stp.osp.server.service.OspServer : DEBUG - [session:1758555252] Session initialization - SGITOPS/SG_LAW_MRC
2020-08-12 14:08:12,775 [Thread-233 - Worker-54] murex.processing.stp.osp.server.service.OspServer : DEBUG - [session:1758555252] Excluded - SGITOPS/SG_LAW_MRC
2020-08-12 14:08:11,912 [Thread-233 - Worker-54] murex.processing.stp.osp.server.service.OspServer : DEBUG - [session:1758555252] Session initialized
2020-08-12 14:08:12,912 [Thread-233 - Worker-54] murex.processing.stp.osp.server.service.OspServer : DEBUG - [session:1758555252] Session Excluded2
2020-08-12 14:08:12,912 JUST FOR Testing
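A quick way to sanity-check the setparsing regex is to replay the nullQueue/indexQueue rule order outside Splunk. The sketch below (plain Python, not Splunk itself) applies the same logic: everything defaults to the null queue, and a match on the keep-regex overrides the destination. Note that in "Session initialization | Session initialized" the spaces around the pipe become part of each alternative; a tighter pattern such as "Session initializ(?:ation|ed)" avoids depending on surrounding whitespace.

```python
import re

# Keep-regex: alternation without relying on the literal spaces that
# "Session initialization | Session initialized" would require.
KEEP = re.compile(r"Session initializ(?:ation|ed)")

lines = [
    "2020-08-12 14:08:11,775 [...] DEBUG - [session:1758555252] Session initialization - SGITOPS/SG_LAW_MRC",
    "2020-08-12 14:08:12,775 [...] DEBUG - [session:1758555252] Excluded - SGITOPS/SG_LAW_MRC",
    "2020-08-12 14:08:11,912 [...] DEBUG - [session:1758555252] Session initialized",
    "2020-08-12 14:08:12,912 [...] DEBUG - [session:1758555252] Session Excluded2",
    "2020-08-12 14:08:12,912 JUST FOR Testing",
]

def route(line):
    # [setnull] REGEX = .  -> send everything to nullQueue first
    queue = "nullQueue"
    # [setparsing] runs second; a match overrides the destination
    if KEEP.search(line):
        queue = "indexQueue"
    return queue

kept = [l for l in lines if route(l) == "indexQueue"]
print(len(kept))  # lines 1 and 3 survive
```

If the sample lines route correctly here but Splunk still drops everything, it is worth confirming the props/transforms stanzas live on the first "heavy" instance that parses the data (HF or indexer), since queue routing is an index-time operation.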
I'm trying to get my dashboard to show a modal window containing a table view with a search assigned to it. The table view's search does not re-run with the updated token value.

<row>
  <panel>
    <title>$show_ModalTable$</title>
    <chart>
      <search>
        <query>index=_internal sourcetype="splunkd" log_level IN ("ERROR", "WARN")|stats count by component</query>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </search>
      <option name="refresh.display">progressbar</option>
      <drilldown>
        <set token="show_ModalTable">$row.component$</set>
      </drilldown>
    </chart>
  </panel>
  <panel>
    <html>
      <div class="modal hide fade" id="modaltable_div">
        <div class="modal-header">
          <button type="button" class="close" data-dismiss="modal" aria-hidden="true"/>
          <h3>Modal header</h3>
        </div>
        <div id="modaltable_detail"/>
        <!--<div class="modal-footer">
          <a href="#" class="btn">Close</a>
          <a href="#" class="btn btn-primary">Save changes</a>
        </div>-->
      </div>
    </html>
  </panel>
</row>

require([
  'underscore',
  'jquery',
  'splunkjs/mvc',
  'splunkjs/mvc/searchmanager',
  'splunkjs/mvc/searchcontrolsview',
  'splunkjs/mvc/tableview',
  'splunkjs/mvc/chartview',
  'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, SavedSearchManager, SearchControlsView, TableView, ChartView) {
  // Listen for token changes
  var tokens = mvc.Components.get("default");
  var tv = new TableView({
    id: 'modal_table',
    data: "results",
    managerid: 'modaltable_search',
    el: $('#modaltable_detail')
  }).render();
  tokens.on("change:show_ModalTable", function(model, value, options) {
    $('#modaltable_div').modal();
    //mvc.Components.revokeInstance("modal_table");
    var sm = mvc.Components.getInstance("modaltable_search");
    sm.startSearch();
    console.log(sm.settings.get("search"));
  });
});
Hi all, I am having trouble getting the table below into monthly order. Please help me with this.

Query:

index=moogsoft_e2e
| bin span=1mon _time
| eval month_Year = strftime(_time,"%b-%y")
| chart count over Class_Type by month_Year

Output (tabular format):

Class_Type   Aug-20   Jul-20   Sep-20
NodeDown     2168     2249     2027

Please help me get the month columns in chronological order.
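The Aug-20/Jul-20/Sep-20 columns come out in alphabetical order because "%b-%y" strings do not sort chronologically as text. A common SPL approach is to build the column name from a sortable format such as "%Y-%m", or to reorder the columns after the chart. The plain-Python sketch below just illustrates why a date-aware sort key fixes the ordering:

```python
from datetime import datetime

cols = ["Aug-20", "Jul-20", "Sep-20"]

# Parse "%b-%y" so the sort key is a real date, not a string
ordered = sorted(cols, key=lambda c: datetime.strptime(c, "%b-%y"))
print(ordered)  # ['Jul-20', 'Aug-20', 'Sep-20']
```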
This issue is primarily related to events ingested via the IMAP Mailbox App. We are running a distributed environment with a HF, 3x indexers, and 3x search heads (accessed via a VIP). The install was carried out as per the README.txt instructions for a distributed environment. Some events only appear when searched for on the HF; they do not appear when searched for on the SHs. The results are mixed: some email events do not appear at all on the SHs, and some events may or may not appear. That is, a search on the HF returns 11 events, while the same search on a SH returns 8 events. As always, thanks very much for any assistance.
Enterprise Security has a nice Glass Table feature. I'm wondering if it is possible to include it within a dashboard, or to export it somehow; a Glass Table right now looks more like a "preview" than a regular tool.
So here is my new assignment: we have about 15 alerts that search for various key phrases in our Cisco network's syslogs. Some of them return hundreds or even thousands of results (usually in a "| stats count" format). My boss wants me to create a dashboard with a time picker to see the daily number of returned results for each alert in a stacked bar chart. So basically the x-axis would be the date and the y-axis would be the count, with each alert represented by a different color. I am thinking I need to specify the search for each alert, then somehow assign it a variable to be used in an "xyseries" command. I already have the date conversion from another dashboard:

| convert timeformat="%m/%d" ctime(_time) AS date
| stats count by "xyz", date
| xyseries date, "xyz", count

What I can't seem to get is each alert's search string as a separate item in the series. Here is an example alert:

index=network key_word=*HWPORTMAN-*QUEUE OR key_word=LINECARDMGMTPROTOCOL-*WARNING
| stats count by host
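One way to get a per-alert series is to tag each alert's events with a label (for example, appending something like an eval of alert="hwportman" to each search before combining them), then pivot counts by date and label, which is what xyseries produces. The alert names and counts in this plain-Python sketch are made up; it only shows the shape a stacked bar chart needs:

```python
from collections import defaultdict

# (date, alert_label, count) rows, as each labelled alert search would return
rows = [
    ("09/14", "hwportman", 120),
    ("09/14", "linecard", 30),
    ("09/15", "hwportman", 95),
    ("09/15", "linecard", 42),
]

# Pivot into {date: {alert: count}} -- one stacked bar per date,
# one colored segment per alert label
pivot = defaultdict(dict)
for date, alert, count in rows:
    pivot[date][alert] = count

for date in sorted(pivot):
    print(date, pivot[date])
```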
The Owner selection in Incident Review filters by the account "Full name", but the Investigations filter to add users to the investigation only displays and filters on the account name. I expect that all user lookups in Splunk ES should behave similarly, if not identically.  If only one field is available, I'd prefer the "Full name".  But filtering on both might be nice, if it isn't noisy and doesn't add too much to the backend. Version: Splunk ES on 7.3.3
The requirement is to send data from Splunk to the PTA tool using a scheduled search on the search head. The data should be filtered on some parameters, and the filtered data/events sent to PTA at regular intervals; for example, every hour the events should be filtered and sent to PTA.
I am searching IIS logs, trying to calculate the number of GB transferred each day for the last 7 days. Here is my search:

index=iis sourcetype=iis cs_user_agent="JTDI*" earliest=-7d@d
| stats sum(cs_bytes) as UPLOADS, sum(sc_bytes) as DOWNLOADS by date_mday
| eval UPLOADS=round(UPLOADS/1024/1024/1024,2)
| eval DOWNLOADS=round(DOWNLOADS/1024/1024/1024,2)
| rename date_mday as "Day of the Month"
| sort -"Day of the Month"

The problem I am having is that I get a different result for the 7th day if I use -7d@d vs -8d@d. In both cases, every day should be the total for that day since midnight. So when I search over 8 days, why does my 7th day have more data?
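One thing worth checking: earliest=-7d@d snaps to midnight seven days ago, so the window actually contains eight distinct calendar days (seven full days plus today so far), and -8d@d contains nine; grouping by date_mday then mixes a partial current day in with the full ones, so "the 7th day" in the sorted output may not be the day you expect. This sketch (with a fixed, made-up "now") just counts the calendar days each window covers:

```python
from datetime import datetime, timedelta

def days_in_window(now, days_back):
    """Calendar days covered by earliest=-<days_back>d@d .. now."""
    # @d snaps the start of the window to midnight
    start = (now - timedelta(days=days_back)).replace(
        hour=0, minute=0, second=0, microsecond=0)
    covered = set()
    d = start
    while d <= now:
        covered.add(d.date())
        d += timedelta(days=1)
    return covered

now = datetime(2020, 9, 15, 10, 30)  # example "now", mid-morning
print(len(days_in_window(now, 7)))   # 8 distinct calendar days
print(len(days_in_window(now, 8)))   # 9 distinct calendar days
```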
We set up 3 deployment servers behind a load-balancer VIP. We set the VIP in deploymentclient.conf, but those agents are not checking in. We logged in locally to both a *nix and a Windows machine, and we can telnet to the VIP on port 8089. The load balancer shows that traffic is being distributed between all three. Also, only our original deployment server is showing clients; #2 and #3 do not show any clients at all. All serverclass.conf files match, and we have serverChecksum set to true in global. All of the app files are identical.
I have an index that has the fields start date and end date. I need to find the difference between the two timestamps, convert it into days, and put it into different duration buckets. Following is an example of the data:

ID   START_DATE              END_DATE
1    1970-03-12 00:00:00.0   2020-06-17 00:00:00.0
2    2015-02-01 00:00:00.0   2020-01-02 00:00:00.0

and so on. My query looks like:

index={something}
| where START_DATE!="" AND END_DATE!=""
| eval difftime=strptime(END_DATE,"%Y-%m-%d %H:%M:%S.%3N")-strptime(START_DATE,"%Y-%m-%d %H:%M:%S.%3N")
| eval daydiff = round(difftime/86400)
| eval Label=case(
    daydiff <= 30, "<=30 Days",
    daydiff > 30 AND daydiff <= 90, ">30 Days AND <= 90 Days",
    daydiff > 90 AND daydiff <= 365, ">90 Days AND <= 12 Months",
    daydiff > 365 AND daydiff <= 730, ">12 Months AND <= 24 Months",
    daydiff > 730 AND daydiff <= 1095, ">24 Months AND <= 36 Months",
    daydiff > 1095, ">36 Months")
| stats count(ID) as Counts by Label
| eval SortLabel = case(
    Label="<=30 Days",1,
    Label=">30 Days AND <= 90 Days",2,
    Label=">90 Days AND <= 12 Months",3,
    Label=">12 Months AND <= 24 Months",4,
    Label=">24 Months AND <= 36 Months",5,
    Label=">36 Months",6)
| sort SortLabel
| table Label Counts

Problem: when the start date is in 1970, strptime isn't returning anything at all (which I think is a known issue), which gives me wrong counts. A workaround I thought of was to add an if statement wherever I'm doing the conversion and hardcode the value to 0. But that won't work if, say, the start date and end date are both in 1970; in that case both would be 0, and the count for the first label would increase, whereas the count for the appropriate duration bucket should increase. Is there a way to do this? Is there any other function to get the UNIX time, or is there a better way to do this? Alternatively, can I find the difference between the two times directly somehow?
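As a cross-check of the bucketing logic, here is the same computation done with date arithmetic instead of epoch subtraction. Python's datetime has no trouble with 1970 dates, which makes it easy to confirm what daydiff and the label should come out to for the sample rows (this is an illustration of the expected answers, not a Splunk-side fix):

```python
from datetime import datetime

def bucket(start, end):
    """Return the duration-bucket label for a start/end timestamp pair."""
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    daydiff = (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days
    if daydiff <= 30:
        return "<=30 Days"
    if daydiff <= 90:
        return ">30 Days AND <= 90 Days"
    if daydiff <= 365:
        return ">90 Days AND <= 12 Months"
    if daydiff <= 730:
        return ">12 Months AND <= 24 Months"
    if daydiff <= 1095:
        return ">24 Months AND <= 36 Months"
    return ">36 Months"

# Both sample rows span well over three years
print(bucket("1970-03-12 00:00:00.0", "2020-06-17 00:00:00.0"))  # >36 Months
print(bucket("2015-02-01 00:00:00.0", "2020-01-02 00:00:00.0"))  # >36 Months
```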
Hi, we have a requirement to configure the ServiceNow Security Operations Add-on to use a proxy URL, proxy port, and also a certificate for the proxy. I can see there are options to specify proxy_url and proxy_port. Is it also possible to integrate the certificate of our proxy? Does it need to be done within the add-on, within Splunk, or at the OS/Linux level? Thanks.
I am trying to write a search that gets the top two failed-policy counts for each cycledate. The search below works for a single day but not for multiple cycledates.

index=xxx host=yy* source="*E:\\logfile\*" tag="*error*" "Error ==>*"
| stats distinct_count(polnum) as FailedPolicy by error_message, err_code, cycledate
| sort 2 -FailedPolicy

Table without the "sort 2 -FailedPolicy":

error_message   err_code   cycledate   FailedPolicy
Err1            20167      09112020    35
Err2            23461      09112020    12
Err3            23451      09112020    22
Err4            1324       09112020    3
Err5            134155     09102020    21
Err6            3245       09102020    81
Err7            1234       09102020    2
Err8            4124       09092020    21
Err9            567        09092020    31
Err10           9873       09092020    45
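"sort 2" keeps only two rows of the whole result set, not two per cycledate. In SPL this is commonly solved by sorting and then filtering with a per-group running count (e.g. streamstats by cycledate); the plain-Python sketch below just shows the intended grouping, using the rows from the table above:

```python
from collections import defaultdict

# (error_message, err_code, cycledate, FailedPolicy) rows from the table
rows = [
    ("Err1", "20167", "09112020", 35), ("Err2", "23461", "09112020", 12),
    ("Err3", "23451", "09112020", 22), ("Err4", "1324", "09112020", 3),
    ("Err5", "134155", "09102020", 21), ("Err6", "3245", "09102020", 81),
    ("Err7", "1234", "09102020", 2), ("Err8", "4124", "09092020", 21),
    ("Err9", "567", "09092020", 31), ("Err10", "9873", "09092020", 45),
]

# Group rows by cycledate, then take the two highest counts within each group
by_date = defaultdict(list)
for err, code, cycledate, failed in rows:
    by_date[cycledate].append((failed, err))

top2 = {d: sorted(v, reverse=True)[:2] for d, v in by_date.items()}
for d in sorted(top2):
    print(d, top2[d])
```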
We have a client using an on-prem non-HTTPS controller. Currently we are attempting to add HTTPS URLs to Service Availability and have noticed that we get 404 errors. Do we have to create a keystore for the machine agent and import the relevant cert?
Hello, I would appreciate guidance on how to write an aggregation search. What I want to do is the following:

- Aggregate by sales count and output the top 3 products.
- Sum everything else into an "Other" row.
- Merge the aggregated discounted-sales counts into the sales aggregation, keyed on product name.

The input looks like this:

Product   Sales
aaa       300
aaa       400
aaa       500
bbb       300
ccc       200
ccc       300
ccc       500
ddd       100
ddd       100
eee       800

Product   Discounted sales
aaa       100
aaa       50
aaa       200
bbb       200
ccc       10
ccc       200
ccc       100
ddd       20
ddd       50
eee       100

Desired result after aggregation:

Product   Sales   Discounted sales
aaa       1200    350
ccc       1000    310
eee       800     100
Other     500     270

Thank you in advance.
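In SPL this kind of result is often built from two summed searches joined on the product name, with non-top-3 products relabelled as "Other" before a final sum. The Python sketch below reproduces the desired table from the sample data, to confirm the intended numbers:

```python
from collections import Counter

# Sum sales and discounted sales per product
sales = Counter()
for name, n in [("aaa",300),("aaa",400),("aaa",500),("bbb",300),("ccc",200),
                ("ccc",300),("ccc",500),("ddd",100),("ddd",100),("eee",800)]:
    sales[name] += n

discount = Counter()
for name, n in [("aaa",100),("aaa",50),("aaa",200),("bbb",200),("ccc",10),
                ("ccc",200),("ccc",100),("ddd",20),("ddd",50),("eee",100)]:
    discount[name] += n

# Top 3 products by sales; everything else rolls up into "Other"
top3 = [name for name, _ in sales.most_common(3)]
result = [(n, sales[n], discount[n]) for n in top3]
rest = [n for n in sales if n not in top3]
result.append(("Other",
               sum(sales[n] for n in rest),
               sum(discount[n] for n in rest)))
for row in result:
    print(row)
```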
Hello, I would like to copy the configuration of the Splunk DB Connect app to another server. I would like to carry over:
- connections and identities
- my additional driver for the SAP HANA DB, which I have installed on the first server

I am planning to copy the following directories from server 1 to server 2: local, drivers, and keystore. Is that sufficient, or do I have to take care of some additional things? Kind regards, Kamil
I am trying to make Splunk index a zipped file that is generated every hour. I use the batch method so that the file is destroyed once it has been dealt with; however, I do not want Splunk to read the contents of the file, but rather just index the zipped file itself for archival purposes. Then, if I need it in the future, I can extract it at a later date. I have looked into the props.conf (invalid_cause) method, but it seems to either extract the zipped file before indexing or fail entirely (errors). Does anyone have experience or advice with this?
Hi. I'm trying to create a KV store collection using this command (from this tutorial: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/usetherestapitomanagekv/ ):

curl -k -u admin:<mypassword> \
  -d name=kvstorecoll \
  https://<myhost>/servicesNS/nobody/<myapp>/storage/collections/config

The call goes through and I see a result in the console; however, when I check in Splunk I do not see the KV store collection I created. Is there anything special I need to be aware of here? Thanks!
Dear Splunkers, I need your help filtering out the data I am receiving before it is stored in the indexer. Below is a sample of the data I am receiving; I am interested in keeping the data in the tags below and discarding the rest.

Interesting data: <name>MACHINE_HOSTNAME</name> and <ip_address>11.22.33.44</ip_address>

Sample data:

<computer><general><id>1234</id><name>MACHINE_HOSTNAME</name><network_adapter_type>XXXXXX</network_adapter_type><mac_address>XX:XX:XX:XX:XX:XX</mac_address><alt_network_adapter_type>Ethernet</alt_network_adapter_type><alt_mac_address>XX:XX:XX:XX:XX:XX</alt_mac_address><ip_address>11.22.33.44</ip_address><last_reported_ip>12.34.56.78</last_reported_ip><serial_number>XXXXXXXXXX</serial_number><udid>XXXX-XXXX-XX-XX</udid><jamf_version>10.X.0-tXXXXXX</jamf_version><platform>Mac</platform><barcode_1 /><barcode_2 /><asset_tag /><remote_management><managed>true</managed>

Regards, Abhi
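Keeping only fragments of an event at index time is usually done on a heavy forwarder or indexer with SEDCMD or a TRANSFORMS rewrite in props.conf. As a sanity check that a regex can pull exactly those two tags out of the sample event, here is a plain-Python version; the patterns are the part worth carrying over, not the script itself:

```python
import re

# Abbreviated copy of the sample event (tags of interest plus a decoy
# <last_reported_ip> tag that must not match)
event = ("<computer><general><id>1234</id><name>MACHINE_HOSTNAME</name>"
         "<ip_address>11.22.33.44</ip_address>"
         "<last_reported_ip>12.34.56.78</last_reported_ip>")

# Capture only the two tags of interest
name = re.search(r"<name>([^<]*)</name>", event).group(1)
ip = re.search(r"<ip_address>([^<]*)</ip_address>", event).group(1)
print(name, ip)  # MACHINE_HOSTNAME 11.22.33.44
```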
Hi Team, I am trying to create a script which generates a report of incidents from BMC Remedy, but it is not accurate. Please give me any hint. Thanks, Prtri