All Posts

You are right - I misunderstood what you were trying to do - try this

| eval row=mvrange(0,2)
| mvexpand row
| eval sent=if(row=0,AMOUNT,null())
| eval received=if(row=1,AMOUNT,null())
| eval account=if(row=0,ACCOUNT_FROM,ACCOUNT_TO)
| eventstats sum(sent) as total_sent sum(received) as total_received by account
| fillnull value=0 total_sent total_received
| where total_sent > total_received
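For illustration, here is a self-contained emulation of the same approach with made-up data (alice, bob, carol and the AMOUNT values are invented, and makeresults format=csv needs a reasonably recent Splunk version):

| makeresults format=csv data="ACCOUNT_FROM,ACCOUNT_TO,AMOUNT
alice,bob,100
bob,alice,40
alice,carol,60"
| eval row=mvrange(0,2)
| mvexpand row
``` row=0 represents the sending side, row=1 the receiving side of each transaction ```
| eval sent=if(row=0,AMOUNT,null())
| eval received=if(row=1,AMOUNT,null())
| eval account=if(row=0,ACCOUNT_FROM,ACCOUNT_TO)
| eventstats sum(sent) as total_sent sum(received) as total_received by account
| fillnull value=0 total_sent total_received
| where total_sent > total_received

With this sample, alice sends 160 and receives 40, so only the rows where account="alice" survive the final where - including the transaction in which she is the receiver, which is exactly the "both send and receive" output being asked for.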
There is some confusion. We are not talking about the "Splunk Add-on for Cisco ESA". I asked about the MIME Decoder Add-on for Cisco ESA and its compatibility. The listed compatibility with the latest versions is: Splunk Enterprise Platform Version: 9.2, 9.1, 9.0, 8.2, 8.1, 8.0
I saw the following text in the documentation:

When ingesting metrics data, each metric event is measured by volume like event data. However, the per-event size measurement is capped at 150 bytes. Metric events that exceed 150 bytes are recorded as only 150 bytes. Metric events less than 150 bytes are recorded as event size in bytes plus 18 bytes, up to a maximum of 150 bytes. Metrics data draws from the same license quota as event data.

I'm wondering how Splunk handles multi-metric events with dimensions and tags. Here is an example:

{
Tag1: Cross-Direction (CD)
Type: CSV
Unit: LS77100
Groupe: Traverse
metric_name: LS77100.Traverse.Y1: 1.15
metric_name: LS77100.Traverse.Y2: 2.13
metric_name: LS77100.Traverse.Y3: 2.14
metric_name: LS77100.Traverse.Y4: 1.16
}

So what counts as a byte here? Do I have to pay for every character after "metric_name:"? And what about the tags above: do I pay for a tag like Tag1 or Unit once, or four times in this example? This example has just four points; in reality there are around 3,000 points. At the moment I'm sending the information to Splunk as events. I'm considering ingesting them as metrics instead because I assume they perform better. Another option might be to send them as events, split them, and use mcollect - I'm not sure what the best way is.
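For the mcollect route mentioned at the end, a minimal sketch might look like the following - assuming the Y values have already been extracted as numeric fields, that a metrics index named my_metrics exists, and that the index, sourcetype, and field names here are purely illustrative:

index=my_events sourcetype=my_machine_data
``` rename the numeric fields so their names become the metric names ```
| rename Y1 as "LS77100.Traverse.Y1", Y2 as "LS77100.Traverse.Y2", Y3 as "LS77100.Traverse.Y3", Y4 as "LS77100.Traverse.Y4"
``` with split=true, the listed fields become dimensions and every other numeric field becomes its own metric data point ```
| mcollect index=my_metrics split=true Tag1 Type Unit Groupe

Whether the metric or event route is cheaper for licensing depends on the per-event cap described in the documentation quote above, so it is worth measuring both against a sample of the real 3,000-point payloads.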
With load balancing the Universal Forwarder sends data to all the indexers equally, so that no single indexer gets all the data and together the indexers hold all the data. It also provides automatic switchover in case an indexer goes down. Load balancing can be set up on the UF in outputs.conf in two ways:

By time
By volume

For time-based load balancing we use the autoLBFrequency setting, and for volume-based load balancing we use autoLBVolume. Let's say I have three indexers to which I want to send data from the UF. My outputs.conf will look like this:

[tcpout:my_indexers]
server=10.10.10.1:9997, 10.10.10.2:9997, 10.10.10.3:9997

Now, to send data for 3 minutes to an indexer, then switch to another indexer and then to another, set autoLBFrequency like this:

autoLBFrequency=180

With this setting the UF sends data to indexer 10.10.10.1 for 3 minutes continuously, then moves on to the other indexers, and this loop continues.

To send data based on volume - say, to configure the UF to send 1MB of data to an indexer before switching to the next indexer in the list - the setting looks like this:

autoLBVolume=1048576

In the case of a very large file, such as a chatty syslog file, or when loading a large amount of historical data, the forwarder may become "stuck" on one indexer, trying to reach EOF before being able to switch to another indexer. To mitigate this, you can use the forceTimebasedAutoLB setting on the forwarder. With this setting, the forwarder does not wait for a safe logical point and instead makes a hard switch to a different indexer every autoLB cycle.

forceTimebasedAutoLB = true

To guard against loss of data when forwarding to an indexer you can enable indexer acknowledgment. With indexer acknowledgment, the forwarder resends any data that the indexer does not acknowledge as "received". The useACK setting is used for this purpose:

useACK = true

The final outputs.conf will look like this:

[tcpout]
useACK = true
autoLBFrequency=180
autoLBVolume=1048576

[tcpout:my_indexers]
server=10.10.10.1:9997, 10.10.10.2:9997, 10.10.10.3:9997
But it isn't right, is it? Your query produces total_sent for ACCOUNT_FROM, and total_received for ACCOUNT_TO. Since ACCOUNT_FROM and ACCOUNT_TO are two different people, "where total_sent > total_received" does not make sense.
Hi @nhana_mulyana,
go to the Partner Portal and sign in with your account; if you're correctly associated with your company, you can click on the Partner Company manage button.
At the bottom of the new dashboard you can find the "Download letter of Authorization" button.
Ciao.
Giuseppe
How to download the company certificate as a Splunk partner?
Try something like this

| eventstats sum(AMOUNT) as total_sent by ACCOUNT_FROM
| eventstats sum(AMOUNT) as total_received by ACCOUNT_TO
| table _time ACCOUNT_TO ACCOUNT_FROM TRACE total_sent total_received INFO AMOUNT
| where total_sent > total_received
Hello, I need help with a search query that at first seemed easy but is surprisingly difficult to execute. I have a money transaction DB between two people, and I have to find which person sends out more money than they receive, then output all of their transactions (both sent and received). My query is like so:

index=myindex
| eventstats sum(AMOUNT) as total_sent by ACCOUNT_FROM
| eval temp=ACCOUNT_FROM
| table _time ACCOUNT_TO ACCOUNT_FROM TRACE total_sent INFO temp
| join type=inner temp
    [search index=myindex
    | stats sum(AMOUNT) as total_received by ACCOUNT_TO
    | eval temp=ACCOUNT_TO]
| where total_sent > total_received

This query only produces the transactions in which that account is sending out, but not the transactions that the account receives. How do I go about this? I'm thinking about outputting temp as a CSV and using inputlookup against the DB again.
You need to split process up if you want to treat the parts of the command line as separate things. Try this

| eval parts=split(process," ")
| search user_name=$user_name$ dest=$host_name$ process="$user_command$" NOT parts IN ($exclude_command$)
Hi richgalloway, from my point of view the index and the datamodel fields are looking good.
Good morning,

Found another reason in our case why the searches are so slow (accelerating the CIM auth datamodel).

Our Network Operations Team activated Cisco TrustSec logging for one of our customers. Since then, we index more than 10 million TrustSec logs, to which we apply the props and transforms. These logs definitely don't need all of this knowledge - it is a simple KV structure. Here is an example:

<190>126269710: 126329890: Jul 19 07:56:50.999 CEST: %RBM-6-SGACLHIT: ingress_interface='TenGigabitEthernet2/1/7' sgacl_name='Permit_IP_Log-01' action='Permit' protocol='tcp' src-vrf='CUSTOM_LAN' src-ip='123.123.123.123' src-port='1234' dest-vrf='CUSTOM_LAN' dest-ip='234.234.234.234' dest-port='64399' sgt='0' dgt='16' logging_interval_hits='1'

A specific sourcetype for this type of logs makes sense, I think (see the sketch below).

Regards, and thank you for all the answers.
Tobias
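As a rough illustration of that idea, a sourcetype override along these lines could route the TrustSec events away from the heavy parsing. This is only a sketch: the stanza names cisco:ios, set_trustsec_sourcetype and cisco:trustsec are assumptions, not taken from the actual environment.

props.conf (on the parsing tier):

[cisco:ios]
# assumed original sourcetype; reroute any event containing the TrustSec message ID
TRANSFORMS-trustsec = set_trustsec_sourcetype

transforms.conf:

[set_trustsec_sourcetype]
# match the SGACL hit message and rewrite the sourcetype at index time
REGEX = %RBM-6-SGACLHIT
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:trustsec

Events rewritten to the new sourcetype then skip the props and transforms attached to the original sourcetype; a lightweight extraction for the key='value' pairs can be added to the new sourcetype separately.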
Hi @marka3721,
are you receiving the F5 logs directly, or is there an intermediate log collector?
If there's an intermediate log collector, it probably modifies the log format: search the app's props.conf and transforms.conf for the regexes that apply the sourcetype override and check whether your logs match these regexes.
If not, open a case with Splunk Support, because this app is Splunk supported.
Ciao.
Giuseppe
Can you please list which vulnerability scanner was used to determine this finding, and which plugin ID? This information is vital for us to really triage this:
1) See what the vulnerability scanner is actually looking for
2) See if Splunk is actually affected, as there are many cases where it is not

I'm guessing this is from Qualys, and is QID 38599. I know this plugin is old, as it's mentioned in a community post on Qualys' website from 2014, and is related to the 'CRIME' attack. If I am correct on this being the plugin in question, it's about CVE-2012-4929. There is an official Splunk response about this CVE here: https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-shows-vulnerable-to-CVE-2012-4929-in-my-Nessus/m-p/29092/highlight/true#M69597

Important parts from what the Splunk employee stated are:
This is directed at web browsers hitting web servers, which are not any of the ports you listed.
Splunk doesn't use SPDY at all.
This bug is more about web browsers hitting servers, and as such Splunk web won't mitigate against vulnerable browsers.

That said, again, this is from 2012 and it's 2024, so it's unlikely anyone is using an affected browser, and even if they were, 8089/8191/8088 are not web server ports. I would write up documentation stating that this plugin should be ignored for this use case, as this vulnerability is no longer relevant on modern technology.
@bowesmana is correct. mvfind won't accept two variables. Also, as he says, single quotes should be used to represent field values in the where command. This is an alternative solution:

index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| eval "ASN"='securityContext.asNumber'
| eval "Session ID"='authenticationContext.externalSessionId'
| eval "User"='actor.alternateId'
| eval "Risk"='debugContext.debugData.risk'
| stats dc("user_agent") as "Agent Count" values(user_agent) AS "User Agent" dc(ASN) as "ASN Count" values(ASN) as ASN dc(Risk) as "Risk Count" values(Risk) as Risk by User "Session ID"
| table "Session ID", ASN, "ASN Count", "User Agent", "Agent Count", User, Risk
| lookup asn_user.csv User output ASN as ASNfound
| where 'ASN Count' > 1 AND 'Agent Count' > 1 AND mvmap(ASN, if(ASN == ASNfound, "yes", "no")) == "no"

Here is an emulation of your illustrated data after lookup:

| makeresults format=csv data="Session ID, ASN, ASN Count, User Agent, Agent Count, User, Risk, ASNfound
idxxxxxxxxxxxx ,\"12345 321\",2 ,\"UserAgent1 UserAgent2\",2, user@company.com, \"{reasons=Anomalous Device, level=MEDIUM}\", \"12345 321\"
idxxxxxxxxxxxx, \"6789 321\",2, \"UserAgent1 UserAgent2\",2, user@company.com, \"{reasons=Anomalous Device, level=MEDIUM}\", \"12345 321\""
``` the above emulates
index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| eval "ASN"='securityContext.asNumber'
| eval "Session ID"='authenticationContext.externalSessionId'
| eval "User"='actor.alternateId'
| eval "Risk"='debugContext.debugData.risk'
| stats dc("user_agent") as "Agent Count" values(user_agent) AS "User Agent" dc(ASN) as "ASN Count" values(ASN) as ASN dc(Risk) as "Risk Count" values(Risk) as Risk by User "Session ID"
| table "Session ID", ASN, "ASN Count", "User Agent", "Agent Count", User, Risk
| lookup asn_user.csv User output ASN as ASNfound ```
| where 'ASN Count' > 1 AND 'Agent Count' > 1 AND mvmap(ASN, if(ASN == ASNfound, "yes", "no")) == "no"

(The above uses a side effect of SPL's equality operator.) It gives one surviving row:

Session ID  = idxxxxxxxxxxxx
ASN         = 6789 321
ASN Count   = 2
ASNfound    = 12345 321
User Agent  = UserAgent1 UserAgent2
Agent Count = 2
User        = user@company.com
Risk        = {reasons=Anomalous Device, level=MEDIUM}

Play with it and compare with real data.
So when running the Splunk service, you do not want to be running it as root (which is primarily what sudo does). Since you have run some of the commands via sudo, some of the file permissions were most likely changed so that root owns them. You would want to follow these steps.

First, ensure that the splunk user/group owns the files, since you have been running things as root (sudo):

1) sudo chown -R splunk:splunk /opt/splunk

Second, become the splunk user:

2) sudo su splunk

Then run your commands as normal:

3) ./splunk enable boot-start -user splunk

or, if you are using systemd on your system:

./splunk enable boot-start -user splunk -systemd-managed 1

By running the commands as the splunk user, you ensure that the splunk user maintains ownership of /opt/splunk, which means that enable boot-start will work. I think if you checked your Linux logs, you would see that during boot-up there are probably permission errors stating that the user splunk does not have access to the /opt/splunk folder, due to the sudo issues.

After doing this, while still the splunk user, you can run ./splunk start. If you don't want to do sudo su splunk to become the user, you can use something like this instead:

sudo -H -u splunk $SPLUNK_HOME/bin/splunk start

This lets you run sudo as your own user, tells it to act as the splunk user, and then starts Splunk. This method of sudo usage can replace sudo su splunk directly if needed.
Thank you, I figured it out.
Here's the output from your provided search query; it's ignoring the exclude input.
Hi, Tom-san! Thank you for the advice. It seems the problem was that the "stateOnClient" setting was also in the app specification. When I removed that setting the error went away.

[serverClass:<serverClassName>:app:<AppName>]
stateOnClient = noop   ←★
restartSplunkWeb = 0
restartsplunkd = 1

[serverClass:<serverClassName>]
stateOnClient = noop
whitelist.0 = server1

For reference, I would like to know if there are any other relevant settings. As mentioned above, the DS specifies serverRepositoryLocationPolicy and repositoryLocation for the cluster manager, so the DS app is saved in manager-app. Is it possible to deploy the app to the manager?
@yuanliu mvfind() will not work with two potentially MV fields.