All Posts

Hi @nhana_mulyana, go to the Partner Portal and sign in with your account. If you're correctly associated with your company, you can click the Partner Company manage button; at the bottom of the new dashboard you can find the "Download Letter of Authorization" button. Ciao. Giuseppe
How to download the company certificate as a Splunk partner?
Try something like this

| eventstats sum(AMOUNT) as total_sent by ACCOUNT_FROM
| eventstats sum(AMOUNT) as total_received by ACCOUNT_TO
| table _time ACCOUNT_TO ACCOUNT_FROM TRACE total_sent total_received INFO AMOUNT
| where total_sent > total_received
Hello, I need help with a search query that at first seemed easy but is surprisingly difficult to execute. I have a money transaction db between two people, and I have to find which person sends out more money than they receive, then output all of their transactions (both sent and received). My query is like so:

index=myindex
| eventstats sum(AMOUNT) as total_sent by ACCOUNT_FROM
| eval temp=ACCOUNT_FROM
| table _time ACCOUNT_TO ACCOUNT_FROM TRACE total_sent INFO temp
| join type=inner temp
    [search index=myindex
    | stats sum(AMOUNT) as total_received by ACCOUNT_TO
    | eval temp=ACCOUNT_TO]
| where total_sent > total_received

This query only produces the transactions where the account is sending out, but not the transactions that the account receives. How do I go about this? I'm thinking about outputting temp as a CSV and using inputlookup against the db again.
You need to split process up if you want to treat the parts of the command line as separate things. Try this

| eval parts=split(process," ")
| search user_name=$user_name$ dest=$host_name$ process="$user_command$" NOT parts IN ($exclude_command$)
Hi richgalloway, from my point of view the index and the datamodel fields are looking good.
Good Morning,

Found another reason in our case why the searches are so slow (accelerating the CIM Auth data model).

Our Network Operations Team activated Cisco TrustSec logging for one of our customers. Since then, we index more than 10 million TrustSec logs, to which we apply the props and transforms. These logs definitely don't need all this knowledge; it is a simple KV structure. Here is an example:

<190>126269710: 126329890: Jul 19 07:56:50.999 CEST: %RBM-6-SGACLHIT: ingress_interface='TenGigabitEthernet2/1/7' sgacl_name='Permit_IP_Log-01' action='Permit' protocol='tcp' src-vrf='CUSTOM_LAN' src-ip='123.123.123.123' src-port='1234' dest-vrf='CUSTOM_LAN' dest-ip='234.234.234.234' dest-port='64399' sgt='0' dgt='16' logging_interval_hits='1'

A specific sourcetype for this type of logs makes sense, I think; see the sketch below.

Regards, and thank you for all the answers. Tobias
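For what it's worth, a minimal sketch of how such logs could be routed to their own lightweight sourcetype at parse time; the incoming sourcetype name (cisco:ios) and the new sourcetype name (cisco:trustsec) are placeholders for illustration, not taken from this setup:

# props.conf (on the indexer or heavy forwarder that parses the data)
[cisco:ios]
TRANSFORMS-trustsec = set_trustsec_sourcetype

# keep the new sourcetype lightweight: no line merging, search-time KV extraction only
[cisco:trustsec]
SHOULD_LINEMERGE = false
KV_MODE = auto

# transforms.conf
[set_trustsec_sourcetype]
REGEX = %RBM-6-SGACLHIT
FORMAT = sourcetype::cisco:trustsec
DEST_KEY = MetaData:Sourcetype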
Hi @marka3721, are you directly receiving the F5 logs or is there an intermediate log collector? If there's an intermediate log collector, it probably modifies the log format; search in the app's props.conf and transforms.conf for the regexes that apply the sourcetype override and check whether your logs match them (see the typical shape of such an override below). If not, open a case with Splunk Support, because this app is Splunk supported. Ciao. Giuseppe
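For reference, a sourcetype override in props.conf/transforms.conf typically has this shape; the stanza names, sourcetype names, and regex below are made-up placeholders, so check the app's own files for the real ones:

# props.conf
[f5:bigip:syslog]
TRANSFORMS-force_st = f5_sourcetype_override

# transforms.conf
[f5_sourcetype_override]
REGEX = your-pattern-here
FORMAT = sourcetype::f5:bigip:asm:syslog
DEST_KEY = MetaData:Sourcetype

If the collector rewrites the syslog header, the REGEX in the real stanza may simply no longer match, and the events keep their original sourcetype.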
Can you please list which vulnerability scanner was used to determine this finding, and which plugin ID? This information is vital for us to really triage this:

1) See what the vulnerability scanner is actually looking for
2) See if Splunk is actually affected, as scanners often flag software that is not actually affected

I'm guessing this is from Qualys, and is QID 38599. I know this plugin is old, as it's mentioned in a community post on Qualys' website from 2014, and is related to the 'CRIME' attack. If I am correct on this being the plugin in question, it's about CVE-2012-4929. There is an official Splunk response about this CVE here: https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-shows-vulnerable-to-CVE-2012-4929-in-my-Nessus/m-p/29092/highlight/true#M69597

Important parts from what the Splunk employee stated are:

- This is directed at web browsers hitting web servers, which are not any of the ports you listed.
- Splunk doesn't use SPDY at all.
- This bug is more about web browsers hitting servers, and as such Splunk Web won't mitigate against vulnerable browsers.

That said, again, this is from 2012 and it's 2024, so it's unlikely anyone is using an affected browser, and even if they were, 8089/8191/8088 are not web server ports. I would write up documentation stating that this plugin should be ignored for this use case, as this vulnerability is no longer relevant on modern technology.
@bowesmana is correct. mvfind won't accept two variables. Also, as he says, single quotes should be used to represent field values in the where command. This is an alternative solution:

index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| eval "ASN"='securityContext.asNumber'
| eval "Session ID"='authenticationContext.externalSessionId'
| eval "User"='actor.alternateId'
| eval "Risk"='debugContext.debugData.risk'
| stats dc("user_agent") as "Agent Count" values(user_agent) AS "User Agent" dc(ASN) as "ASN Count" values(ASN) as ASN dc(Risk) as "Risk Count" values(Risk) as Risk by User "Session ID"
| table "Session ID", ASN, "ASN Count", "User Agent", "Agent Count", User, Risk
| lookup asn_user.csv User output ASN as ASNfound
| where 'ASN Count' > 1 AND 'Agent Count' > 1 AND mvmap(ASN, if(ASN == ASNfound, "yes", "no")) == "no"

Here is an emulation of your illustrated data after lookup:

| makeresults format=csv data="Session ID, ASN, ASN Count, User Agent, Agent Count, User, Risk, ASNfound
idxxxxxxxxxxxx ,\"12345 321\",2 ,\"UserAgent1 UserAgent2\",2, user@company.com, \"{reasons=Anomalous Device, level=MEDIUM}\", \"12345 321\"
idxxxxxxxxxxxx, \"6789 321\",2, \"UserAgent1 UserAgent2\",2, user@company.com, \"{reasons=Anomalous Device, level=MEDIUM}\", \"12345 321\""
``` the above emulates
index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| eval "ASN"='securityContext.asNumber'
| eval "Session ID"='authenticationContext.externalSessionId'
| eval "User"='actor.alternateId'
| eval "Risk"='debugContext.debugData.risk'
| stats dc("user_agent") as "Agent Count" values(user_agent) AS "User Agent" dc(ASN) as "ASN Count" values(ASN) as ASN dc(Risk) as "Risk Count" values(Risk) as Risk by User "Session ID"
| table "Session ID", ASN, "ASN Count", "User Agent", "Agent Count", User, Risk
| lookup asn_user.csv User output ASN as ASNfound ```
| where 'ASN Count' > 1 AND 'Agent Count' > 1 AND mvmap(ASN, if(ASN == ASNfound, "yes", "no")) == "no"

(The above uses a side effect of SPL's equality operator.) It gives one surviving row:

ASN = 6789 321
ASN Count = 2
ASNfound = 12345 321
Agent Count = 2
Risk = {reasons=Anomalous Device, level=MEDIUM}
Session ID = idxxxxxxxxxxxx
User = user@company.com
User Agent = UserAgent1 UserAgent2

Play with it and compare with real data.
So when running the Splunk service, you do not want to be running it as root (which is primarily what sudo does). Since you have run some of the commands via sudo, some of the file permissions most likely were changed so that root owns them. You would want to follow these steps.

First, ensure that the splunk user/group owns the files, since you have been running it as root (sudo):

1) sudo chown -R splunk:splunk /opt/splunk

Second, become the splunk user:

2) sudo su splunk

Then run your commands as normal:

3) ./splunk enable boot-start -user splunk

or

./splunk enable boot-start -user splunk -systemd-managed 1

if you are using systemd on your system.

By running the commands as the splunk user, you ensure that the splunk user maintains ownership over /opt/splunk, which means that enable boot-start will be able to work. I think if you checked your Linux logs, you would see that during boot-up there are probably permission errors stating that the user splunk does not have access to the /opt/splunk folder, due to the sudo issues.

After doing this, while still the splunk user, you can run ./splunk start. If you don't want to do sudo su splunk to become the user, you can use something like this instead:

sudo -H -u splunk $SPLUNK_HOME/bin/splunk start

This will let you use sudo as your user, tell it to act as the splunk user, and then start Splunk. This method of sudo usage can replace sudo su splunk if needed.
Thank you, I figured it out.
Here's the output from your provided search query; it's ignoring the exclude input.
Hi, Tom-san! Thank you for the advice. It seems the problem was that the "stateOnClient" setting was also in the app specification. When I removed that setting the error went away.

[serverClass:<serverClassName>:app:<AppName>]
stateOnClient = noop   ←★
restartSplunkWeb = 0
restartsplunkd = 1

[serverClass:<serverClassName>]
stateOnClient = noop
whitelist.0 = server1

For reference, I would like to know if there are any other relevant settings. As mentioned above, the DS server specifies serverRepositoryLocationPolicy and repositoryLocation for the cluster manager, so the DS app is saved in manager-app. Is it possible to deploy the app to the manager?
@yuanliu mvfind() will not work with two potentially MV fields.
Unfortunately, event type searches cannot contain any pipes, so it has to be a simple raw search fragment; see the sketch below.
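For illustration, a minimal eventtypes.conf sketch under that constraint; the stanza name and search terms are made-up examples:

[failed_ssh_login]
search = sourcetype=linux_secure "Failed password"

Anything that needs a pipe (eval, stats, lookups) has to happen in the search that uses the event type, not in the event type definition itself.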
Does Splunk Enterprise support URI Templates for extending the REST API? I'd like to be able to expose an API such as /resource/{name}/dosomething
sparkline requires the _time field to work; in your case you have two stats commands, so the _time field is lost after your first stats. You can do this:

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=*
| stats sparkline(sum(b)) as trend sum(b) as Usage by idx
| eval trend=mvmap(trend, if(isnum(trend), round(trend/1024/1024/1024,2), trend))
| eval Usage=round(Usage/1024/1024/1024,2)
| rename idx AS index
| sort Usage

The stats command builds the sparkline as a sum of bytes. The mvmap() function is a trick to convert the sparkline values into GiB figures, like you are doing with Usage. A sparkline is simply a special form of a multivalue field whose first element is the value ##__SPARKLINE__##, so this just iterates through the values, leaves the non-numeric first element alone, and rounds each of the remaining values, so the sparkline also shows in GiB.
If your forwarder is not forwarding the complete file, there might be a problem with the line breaker (see the sketch below for the settings involved). This has nothing to do with how to search; Getting Data In is a better forum.
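For reference, line breaking is controlled in props.conf on the parsing tier; a minimal sketch for a single-line event format, with a made-up sourcetype name:

# props.conf
[my:custom:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000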
Assuming you have a lookup file asn_user.csv and you set up a lookup called asn_user.csv (my preference is to not use .csv in the lookup name, but many others do not make the distinction), you can do

index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| eval "ASN"='securityContext.asNumber'
| eval "Session ID"='authenticationContext.externalSessionId'
| eval "User"='actor.alternateId'
| eval "Risk"='debugContext.debugData.risk'
| stats dc("user_agent") as "Agent Count" values(user_agent) AS "User Agent" dc(ASN) as "ASN Count" values(ASN) as ASN dc(Risk) as "Risk Count" values(Risk) as Risk by User "Session ID"
| table "Session ID", ASN, "ASN Count", "User Agent", "Agent Count", User, Risk
| lookup asn_user.csv User output ASN as ASNfound
| where "ASN Count" > 1 AND "Agent Count" > 1 AND NOT mvfind(ASNfound, ASN)