All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @gcusello  Yes, this answered my question, but I have another doubt. The values command does not work if the data got merged into one row after the summary index. Please see the example below (from the attached picture). Please suggest. Thanks

company                        | ip
companyA companyA              | 1.1.1.1
companyB companyB companyB     | 1.1.1.2
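If the merged cells are space-delimited strings, one possible workaround (a sketch only; the index and source names are placeholders, and it assumes the merged field is space-delimited) is to split the merged cell back into a multivalue field with makemv before aggregating:

```spl
index=summary source="my_summary_search"
| makemv delim=" " company
| mvexpand company
| stats values(ip) AS ip BY company
```

mvexpand turns each merged row into one event per company value, after which values() behaves as it would on the original un-summarized events.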
Set up your selection so that the value of the selection includes the where command, except in the case of *

<input type="dropdown" token="position_select" searchWhenChanged="true">
  <label>Floored $position_select$</label>
  <choice value="">*</choice>
  <choice value="| where position=1">1.2</choice>
  <choice value="| where position=2">2.4</choice>
</input>

Then just use the token in your search

index = index1 $position_select$
Since you are using count by Entity, you will get multiple counts, one for each unique Entity. Similarly, since you are using count by title, you will get multiple counts, one for each unique title. Which Entity count do you want to compare with which title count?
You can use the addtotals command to total across each row.

| makeresults
| eval mon=10, tue=23, wen=12, thu=2, fri=15
| addtotals

You get a new field named Total with the value 62.
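For reference, addtotals also accepts options to rename the row-total field and to append a column-totals event; a small sketch using its documented options (the chosen names "WeekTotal" and "row_label" are arbitrary):

```spl
| makeresults
| eval mon=10, tue=23, wen=12, thu=2, fri=15
| addtotals fieldname="WeekTotal" col=true label="Totals" labelfield="row_label"
```

fieldname renames the per-row total, while col=true adds an extra summary row whose label lands in the field named by labelfield.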
Hello. I understand that you access the monitoring web UI through the Launch Controller button on the account page. I received the license today and proceeded with the installation, but two errors occur:

1. <h1>500 Internal Server Error</h1><br/>Exception <br/>
2. HttpErrorResponse <html><body><h1>500 Internal Server Error</h1><br/>Exception <br/></body></html> Http failure response for https://chaplinappdynamics.com/controller/restui/containerApp/mainNavConfig: 500 Internal Server Error

I didn't click "Use local login"; I clicked "Next". Can you tell me what the problem is? Thank you.
Hi @mukhan1, do you have an API to extract the content of this file? If yes, you could develop a script that periodically extracts the logs and writes them to a text file readable by Splunk, or sends them directly to Splunk. Splunk developed connectors (e.g. for wineventlog) to extract non-text files. Ciao. Giuseppe
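If such a script exists, it could be wired in as a scripted input; a minimal sketch, where the script name, interval, sourcetype, and index are all illustrative assumptions rather than anything from this thread:

```ini
# inputs.conf in the app's local directory (all values below are assumptions)
[script://./bin/extract_jbase_audit.sh]
interval = 300
sourcetype = jbase:audit
index = main
disabled = 0
```

Splunk runs the script every interval seconds and indexes whatever it writes to stdout.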
Oh yes, I had forgotten the use of sum, in fact, it works perfectly.   Thank you very much for your help. Best regards, Rajaion
I have a field called position that contains integers, and a token called position_select that is either a floating point number or a * (= all positions). Now I want to search all positions that match position_select. So I tried something like this:

index = index1
| eval position_search = floor($position_select$)
| where position = position_search

The problem is that you of course can't use * in floor. Another problem is that | where position = * is impossible too. However, I can't use | search because | search position = position_search does not work.

So the question is: is there any way to use something like floor() on position_select?
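One possible workaround (a sketch; it quotes the token as a string so the * case can be checked before floor is applied, and assumes position_select is otherwise numeric):

```spl
index = index1
| where "$position_select$" = "*" OR position = floor(tonumber("$position_select$"))
```

When the token is *, tonumber() returns null and the second clause is simply false, so the first clause matches everything.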
Thanks! This sounds like what I'm looking for. I might give this a try. The indexes in question already have several years of retention period configured, so that should be taken care of. It's really just summary indexes that we want to persist for "sentimental reasons".
From the query: using stats count by Entity I get A, and using stats count by title I get B. I want the difference A-B as a count; what should I use?
Your requirement is unclear. If you have a count for "Entity A", for example, which total count do you want to use to find the difference: "Total X", "Total Y" or "Total Z"? And vice versa.
Hi, we receive daily emails with lists of IOCs for malware and phishing alerts. Each email may contain multiple IP addresses, domains, and email addresses, and we are trying to extract these to run searches against our web and email logs. I have the regex working for extraction, but it will only extract the first match. I've tried multiple ways of achieving this without success. The current config is:

props.conf
EXTRACT-IOCURL = (?P<IOCURL>[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9][\[][\.|@][\]][^\s]{2,}|[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9][\[][\.|@][\]][^\s]{2,}|[a-zA-Z0-9]+[\[][\.|@][\]][^\s]{2,}|[a-zA-Z0-9]+[\[][\.|@][\]][^\s]{2,})
EXTRACT-IOCIP = (?P<IOCIP>\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3}+)

The indexed email looks like this:

Domains
comprised[.]site
badsite[.]studio
malware[.]live
IP addresses
192[.]254[.]71[.]78
192[.]71[.]27[.]202
193[.]182[.]144[.]67

but the current config will only extract the first record for each: IOCURL = comprised[.]site and IOCIP = 192[.]254[.]71[.]78. Any ideas how to extract all the domains and IP addresses? Thanks
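EXTRACT- stanzas return only the first match. The usual fixes are a REPORT- extraction paired with MV_ADD = true in transforms.conf, or | rex max_match=0 at search time. A sketch, with the stanza and sourcetype names as assumptions:

```ini
# props.conf (sourcetype name is an assumption)
[ioc_email]
REPORT-iocip = ioc_ip_extract

# transforms.conf
[ioc_ip_extract]
REGEX = (\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3})
FORMAT = IOCIP::$1
MV_ADD = true
```

The search-time equivalent would be | rex max_match=0 "(?<IOCIP>\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3})", which fills IOCIP as a multivalue field containing every match in the event.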
Try it like this (for pre-Splunk 9):

index=* Initialised xxxxxxxxxxxx xxxxxx
| rex "\{consumerName\=\'(MY REGEX)"
| chart count AS Connections by name
| append
    [| makeresults
     | eval name="Container A,Container B,Container C,Container D"
     | eval name=split(name,",")]
| stats count by name
| where count < 2
@gcusello  Yes, you're correct that Splunk can ingest any text file, but my issue is that I have a .log file named "F.JBASE.JED.AUDIT.LOG", and this file is not a text file.
Hello community,

We installed the CIM app in our SH cluster. The installation seemed to work as expected, all defaults, no modifications. Afterwards, when trying to launch the app, we landed on a "ta_nix_configuration" page; trying to access the "cim_setup" page we got a 404.

We removed the app, rolled all members, and re-installed. Once again it all seemed to work just fine. This time around we can access the cim_setup page, though if we try to access "<baseurl>/app/Splunk_SA_CIM/" directly or use the "launch app" link in the GUI, we land on the "<baseurl>/app/Splunk_SA_CIM/ta_nix_configuration" page.

Is this somehow the expected behaviour, or have we got some crossed wires somewhere?
I am afraid it's on version 8.2.11.2. That answers it. I assume there is nothing else we can try on the version I am on? Best regards
I have two counts in the dashboard: one is the total count and the other is the error count; to get the success count, I want the difference. How can we do that?

index=US_WHCRM_int (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=INFO ("*Element*: bmw-cm-wh-xl-cms-contractWithCustomers-flow/processors/2/processors/0 @ bmw-crm-wh-xl-cms-int-api:bmw-crm-wh-xl-cms-api-impl/bmw-cm-wh-xl-cms-contractWithCustomers*") OR "*flow started put*contractWithCustomers" OR "*flow started put*customers:application*" OR "ERROR Message" OR "flow started put*contracts:application*") OR (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=ERROR "Error Message")
| rex field=message "(?<json_ext>\{[\w\W]*\})"
| rex field=message "put:\\\\(?<Entity>[^:]+)"
| rename attributes{}.value.details as details
| rename properties.correlationId as correlationId
| table _time properties.* message json_ext details Entity
| spath input=json_ext
| stats count by Entity

Using | stats count by Entity and | stats count by title, I am getting two counts. How can I find the difference between the Entity count and the title count?
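A common pattern for subtracting an error count from a total count (a sketch; <your base search> stands in for the long query above, and it assumes the severity field present in that search distinguishes errors) is to compute both counts in a single stats and subtract with eval:

```spl
<your base search>
| stats count AS total_count, count(eval(severity="ERROR")) AS error_count
| eval success_count = total_count - error_count
```

If the two counts really must come from two different split-by fields (Entity vs title), appendcols can place the two totals side by side in one row before the eval subtraction.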
https://docs.splunk.com/Documentation/Splunk/9.1.1/Indexer/Usemaintenancemode That's one thing to read. Probably others might add their three cents on this. Furthermore, remember to raise your retention periods for the indexes. Otherwise when you power the cluster back on in some time, it might happily roll buckets to frozen because they will already be old enough.
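Raising retention is an indexes.conf change on the indexers; a minimal sketch, with the index name and period as assumptions:

```ini
# indexes.conf on the indexers (index name and period are illustrative)
[my_summary_index]
# roughly 10 years in seconds; buckets older than this roll to frozen
frozenTimePeriodInSecs = 315360000
```

Set this before the hibernation, so that nothing has aged past the limit by the time the cluster comes back up.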
The valid/working link is here - https://docs.splunk.com/Documentation/Splunk/latest/Viz/DashboardPDFs#Limitations_to_PDF_generation.
I have a Splunk Enterprise cluster that doesn't get new data ingested anymore, but the existing indexes should remain searchable for a while still. Since search usage is only sporadic, I'd like to save on infrastructure cost and hibernate the whole cluster, only bringing it up again when someone needs to search the old data, then hibernating it again.

How would I do this best? My environment consists of a search head cluster with 2 members and an indexer cluster with 6 members. My understanding is that as soon as I start to stop indexers, the cluster would try to rebalance the data onto the remaining indexer nodes. That seems suboptimal, since I need to stop all the instances eventually and don't want to end up with a single indexer node holding all the data.

Any ideas?
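A hedged sketch of the shutdown sequence, using the documented maintenance-mode CLI (run on the cluster manager; the path assumes a default install):

```shell
# On the cluster manager: suspend bucket fixup so peers can go down
# without triggering replication/rebalancing
$SPLUNK_HOME/bin/splunk enable maintenance-mode

# Then stop the peers and search heads, and the manager last
$SPLUNK_HOME/bin/splunk stop

# On wake-up: start the manager first, then the peers; once all peers
# have rejoined, leave maintenance mode
$SPLUNK_HOME/bin/splunk disable maintenance-mode
```

While maintenance mode is enabled, the manager does not initiate bucket fixup, which addresses the rebalancing concern during the staggered shutdown.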