So I have been trying to submit the technical add-on, but this TLS certificate issue is perplexing. We use requests.post in my app to make API calls to our product server from Splunk. After the initial response from the App Review team, we added an option for users to specify the location of their own certificate, which is then used in the requests.post call. But the response we have now gotten from the App Review team is to "make sure you bundle your own CA certs as part of your app and pass the path into requests.post as an arg." Does that mean we set up a private CA, generate the certificate, and bundle it with the app? Why do we still need to bundle a certificate when we have given the end user the option to supply one? Lastly, if we generate the certificate, can the same certificate be used by all app users? In our case, every customer has their own instance of our product, just as every user has their own Splunk instance.
Is there any command in Splunk for looping other than the map command? The requirement is described below; I can't share the actual data, but I can explain the scenario with an example. Let's say we have a Database Manager tool which manages all the DB connections/sessions. Users must log on to the DB Manager tool first using their personal account and then initiate a connection to a database using the database account. Each DB Manager session initiation log contains the following parameters: 1. User Account Name, 2. DB Account Name, 3. DB Name, 4. Session Initiation Time. Each DB login event contains the following parameters: 1. DB Account Name, 2. DB Name, 3. Session Start Time. Hence, each login event on the DB will have a corresponding session initiation log in the DB Manager tool. Let's assume the maximum time difference between these two logs is 30 minutes; that means for each DB login event there should be a log on the DB Manager within the previous 30 minutes. Now the requirement is to create a report of all the DB login events and add to each one the parameter "User Account Name" from the corresponding DB Manager session initiation log.
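A minimal sketch of one join-free way to do this, assuming hypothetical field names user_account, db_account and db_name and that both log types live in the same index: sort both event types together, let streamstats carry the most recent DB Manager initiation forward onto each DB login for the same account/database pair, then keep only logins whose initiation happened within the last 30 minutes (1800 seconds).
index=db (sourcetype=db_manager_init OR sourcetype=db_login)
| eval pair=db_account."|".db_name
| sort 0 pair _time
| streamstats last(user_account) as user_account, last(eval(if(sourcetype=="db_manager_init", _time, null()))) as init_time by pair
| where sourcetype=="db_login" AND (_time - init_time) <= 1800
| table _time db_name db_account user_account
Unlike map, this runs as a single search over both sourcetypes, so it avoids launching one subsearch per login event.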
I have a specific event for which I want to compute an average count over the past 5 business days. Right now I can get the weekly average with the following search, but I want to restrict the count to business days only, so that the average better reflects a normal workday. Including weekends significantly lowers the running average, so the information isn't helpful.
source="wineventlog:application" EventCode=9999 | timechart span=7d count as Avg | eval Avg=round(Avg/7,2)
Thanks in advance for any assistance.
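A rough sketch of one way to do this with the same source and EventCode: count per day, drop Saturdays and Sundays, then average the remaining daily counts.
source="wineventlog:application" EventCode=9999
| bin _time span=1d
| eval weekday=strftime(_time, "%A")
| where weekday!="Saturday" AND weekday!="Sunday"
| stats count by _time
| stats avg(count) as Avg
| eval Avg=round(Avg,2)
Note that business days with zero matching events simply drop out of the average here, which is slightly different from dividing a weekly total by a fixed 5.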
Hi, I was checking the retention period for my indexes main and _internal and noticed there is data outside our default frozenTimePeriodInSecs, which is set to 555 days. I went looking for the buckets that are outside the normal time range with the searches from this answer: https://community.splunk.com/t5/Getting-Data-In/bucket-retention-and-frozenTimePeriodInSecs/m-p/116365 and, to my surprise, the buckets are all hot and way too old. What bothers me is that those buckets all sit in folders called hot_quar_v1_xxxx, while the normal folders use the same naming without the quar part. What should I do with this data? I want to reduce frozenTimePeriodInSecs from 555 days to, say, 30 days, but that doesn't seem to work while there are buckets as old as 100961834 seconds in those hot_quar folders. Is there a way to clean those folders and then set frozenTimePeriodInSecs? Thanks in advance, Jari
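To confirm which buckets are affected before changing anything, a sketch using dbinspect (the index name and the 555-day threshold are placeholders to adjust) could look like:
| dbinspect index=main
| eval age_days=round((now() - startEpoch) / 86400, 0)
| where state=="hot" AND age_days > 555
| table index path state startEpoch endEpoch age_days
Buckets whose path contains hot_quar should show up here; frozenTimePeriodInSecs only applies once a bucket has rolled out of the hot state, which is why old quarantined hot buckets are not being deleted.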
Hello Splunk Community, to get into ReactJS and the Splunk UI Toolkit I created a small app with a component that fetches the Splunk roles and the LDAP groups from Splunk's REST API, for easy mapping of matching groups and roles. Fetching the roles works, but the groups don't. In the code I fetch against '/splunkd/services/admin/LDAP-groups?output_mode=json'. That responds with 303 See Other and refers to '<locale>/splunkd/services/admin/LDAP-groups?output_mode=json', where I get the 404 Page Not Found response. Searching for the requestId shows the following:
proxy:132 - Resource not found: services/admin/LDAP-groups
error:321 - Masking the original 404 message: 'Resource not found: services/admin/LDAP-groups' with 'Page not found!' for security reasons
My user has the change_authentication capability, so theoretically it should work. Am I doing something wrong? Or is that endpoint denied by default when the caller is not Splunk itself?
Hello, I want to understand whether it is possible to monitor an SAP E-commerce Cloud application, deployed on SAP Public Cloud, with AppDynamics. If yes, what is the scope with AppDynamics? Thanks
I have a bar chart in Splunk whose x-axis shows each week from 2019 to 2023 and whose y-axis shows the count of data. Now I want to show only the latest 10 weeks on the x-axis.
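If the chart is driven by a timechart, one minimal fix is simply to narrow the search time range to the last 10 weeks (the index name and span here are placeholders):
index=your_index earliest=-10w@w
| timechart span=1w count
Alternatively, keep the full range and filter the finished timechart with | where _time >= relative_time(now(), "-10w@w").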
Hello Splunkers!! As you can see in the search below, we have used join commands to get results from the same index and sourcetype. Because of the multiple join commands the query has become slow. Please help me understand how I can use a single join command to get the results for all of the fields in "| fields - Total_Orders, Errors, Technical_Error, Operational_Error".
<search>
| join max=0 _time
    [| search ((index=* OR index=_*) index=abc sourcetype=abc)
    | fields + _time, host, source, sourcetype, Active, ErrorCode, ErrorDescription, ErrorDuration, ErrorId, From, Id, Location, ModuleId, OperationalWeighingFactor, ShuttleId, TechnicalWeighingFactor, TraceFlags, TraceId, TraceVersion, Version, date_hour, date_mday, date_minute, date_month, date_second, date_wday, date_year, index, Recoverable
    | eval weeknum=strftime('_time',"%V")
    | eval date_year=strftime('_time',"%Y"), date_month=strftime('_time',"%B"), day_week=strftime('_time',"%A"), date_mday=strftime('_time',"%d"), date_hour=strftime('_time',"%H"), date_minute=strftime('_time',"%M")
    | search (date_year="*" date_month="*" weeknum="*" day_week="*" date_hour="*" date_minute="*" ShuttleId=*)
    | fields + Id, _time, ErrorId, ErrorDescription
    | table ErrorId, _time
    | timechart span="1d@d1" count(ErrorId) as "Errors"]
| sort 0 _time
| fillnull Total_Orders Errors value="0"
| eval Total_Error_Per_10000_Order=round(((Errors / Total_Orders) * 10000),0)
| join max=0 _time
    [| search ((index=* OR index=_*) index=abc sourcetype=abc)
    | fields + _time, host, source, sourcetype, Active, ErrorCode, ErrorDescription, ErrorDuration, ErrorId, From, Id, Location, ModuleId, OperationalWeighingFactor, ShuttleId, TechnicalWeighingFactor, TraceFlags, TraceId, TraceVersion, Version, date_hour, date_mday, date_minute, date_month, date_second, date_wday, date_year, index, Recoverable
    | eval weeknum=strftime('_time',"%V")
    | eval date_year=strftime('_time',"%Y"), date_month=strftime('_time',"%B"), day_week=strftime('_time',"%A"), date_mday=strftime('_time',"%d"), date_hour=strftime('_time',"%H"), date_minute=strftime('_time',"%M")
    | search (date_year="*" date_month="*" weeknum="*" day_week="*" date_hour="*" date_minute="*" ShuttleId=*)
    | fields + Id, _time, ErrorId, ErrorDescription, TechnicalWeighingFactor
    | rename TechnicalWeighingFactor as Technical_Error
    | table _time, ErrorId, Technical_Error
    | search Technical_Error>0.01
    | timechart span="1d@d1" count(Technical_Error) as "Technical_Error"
    | fillnull Technical_Error value="0"]
| fillnull Total_Orders Technical_Error value="0"
| eval Technical_Error_Per_10000_Order=round(((Technical_Error / Total_Orders) * 10000),0)
| join max=0 _time
    [| search ((index=* OR index=_*) index=abc sourcetype=abc)
    | fields + _time, host, source, sourcetype, Active, ErrorCode, ErrorDescription, ErrorDuration, ErrorId, From, Id, Location, ModuleId, OperationalWeighingFactor, ShuttleId, TechnicalWeighingFactor, TraceFlags, TraceId, TraceVersion, Version, date_hour, date_mday, date_minute, date_month, date_second, date_wday, date_year, index, Recoverable
    | eval weeknum=strftime('_time',"%V")
    | eval date_year=strftime('_time',"%Y"), date_month=strftime('_time',"%B"), day_week=strftime('_time',"%A"), date_mday=strftime('_time',"%d"), date_hour=strftime('_time',"%H"), date_minute=strftime('_time',"%M")
    | search (date_year="*" date_month="*" weeknum="*" day_week="*" date_hour="*" date_minute="*" ShuttleId=*)
    | fields + Id, _time, ErrorId, ErrorDescription, OperationalWeighingFactor
    | rename OperationalWeighingFactor as Operational_Error
    | table _time, ErrorId, Operational_Error
    | search Operational_Error>0.01
    | timechart span="1d@d1" count(Operational_Error) as "Operational_Error"
    | fillnull Operational_Error value="0"]
| fillnull Total_Orders Operational_Error Technical_Error value="0"
| eval Operational_Error_Per_10000_Order=round(((Operational_Error / Total_Orders) * 10000),0)
| fields - Total_Orders, Errors, Technical_Error, Operational_Error
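As a sketch of a join-free alternative: the three daily error counts can usually come out of a single pass with conditional counts, using the same 0.01 thresholds the subsearches apply; the Total_Orders series from the outer <search> would still need to be added the same way it is today.
index=abc sourcetype=abc ShuttleId=*
| timechart span=1d count(ErrorId) as Errors,
    count(eval(TechnicalWeighingFactor > 0.01)) as Technical_Error,
    count(eval(OperationalWeighingFactor > 0.01)) as Operational_Error
| fillnull value=0 Errors Technical_Error Operational_Error
Each count(eval(...)) only counts events where the expression is true, which replaces the separate filtered subsearches and the repeated joins on _time.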
I have three queries:
Overall Traffic to LogOn page
sourcetype="od" operation=LogOn http_method=GET http_url="*LogOn*" | timechart count span=1m
OAuth1 Traffic to LogOn page
sourcetype="od" operation=LogOn http_method=GET http_url="*LogOn*" http_url!="*authorization.ping*" identity_consumer_key!="" | timechart count span=1m
OAuth2 Traffic to LogOn page
sourcetype="od" operation=LogOn http_method=GET http_url="*authorization.ping*" | timechart count span=1m
This is what I wrote:
sourcetype="oxygen-standard" identity_operation=LogOn http_method=GET | eval url= case(http_url=="*LogOn*","Overall", http_url=="*LogOn*" http_url!="*authorization.ping*" identity_consumer_key!="","OAuth1", http_url=="*authorization.ping*","OAuth2") | stats count by url
but it does not allow multiple checks in one case branch. How can I combine these three to show in one timechart?
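Because the categories overlap (OAuth1 is a subset of the overall LogOn traffic), case() will only ever put each event in one bucket, and it also compares literally, so wildcards like "*LogOn*" never match; like() or match() is needed instead. One sketch that keeps all three series in a single timechart uses conditional counts:
sourcetype="od" operation=LogOn http_method=GET (http_url="*LogOn*" OR http_url="*authorization.ping*")
| timechart span=1m
    count(eval(like(http_url, "%LogOn%"))) as Overall,
    count(eval(like(http_url, "%LogOn%") AND NOT like(http_url, "%authorization.ping%") AND identity_consumer_key!="")) as OAuth1,
    count(eval(like(http_url, "%authorization.ping%"))) as OAuth2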
Hi, I'm looking for guidance on integrating an on-premise Splunk infrastructure with third-party SaaS providers. We have a SaaS provider that exposes its data over a REST API; what's the best way to consume it from Splunk Enterprise? Is there an officially supported Splunk add-on or modular input that would let us enable this via some simple configuration rather than building something on our own?
After trying to get my head around the settings in indexes.conf for data retention, and trying numerous different approaches, I've decided to ask for guidance here.
Pretext - I have a Splunk indexer with approximately 50 indexes. I want to set up an indexes.conf with as few per-index settings as reasonably possible, to keep this file small and manageable. For hot/warm storage I keep buckets on the SSD-backed storage of the server itself (~8TB available). Cold storage is moved off to a NAS on the network (~100TB available). There is no frozen storage, i.e. data should be deleted after 1 year.
I would like to set up indexes.conf so that:
1. If any individual index has hot/warm data larger than 100GB, it rolls to cold (I would actually prefer to do this based on age, say 60 days, rather than size, but that doesn't seem to be standard functionality in Splunk).
2. If any data in any individual index is older than 1 year, it is permanently deleted.
The definition of the hot/warm/cold storage paths is set up as follows, and I think this is correct.
Any suggestions on how I can achieve the above would be appreciated, preferably using the [default] stanza as much as possible, or wildcard stanzas if that is possible. I would hate to have to define the same settings for 50 indexes, but if there's no other way, then so be it.
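A minimal indexes.conf sketch under those assumptions (102400 MB is roughly the 100GB hot/warm budget per index, 31536000 seconds is 365 days; both in [default] so every index inherits them unless overridden):
[default]
# roll warm buckets to cold once an index's hot/warm (home) path passes ~100GB
homePath.maxDataSizeMB = 102400
# freeze buckets older than 1 year; with no coldToFrozenDir/coldToFrozenScript set, freezing means deletion
frozenTimePeriodInSecs = 31536000
Two caveats: frozenTimePeriodInSecs is judged against the newest event in a bucket, so a bucket is only removed once everything in it is past the threshold, and settings in [default] also apply to internal indexes such as _internal, which may or may not be what you want.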
I've been trying to write an alert that notifies our SOC when someone tries to obfuscate their command with base64 encoding. I used to be able to append the decoded output (using decrypt2) to the command line that was run, doing something like this (very simplified):
|<search for base64 encoded commands>
| decrypt field=encoded atob emit('decrypted')
| fields - encoded
| eval decrypted="decrypted: ".replace(replace(replace(replace(decrypted,"\.\.\.","~&_&~"),"\.","")," "," "),"~&_&~",".")
| eval command_line="command_line: ".command_line
| eval WhatRan=mvappend(command_line," ",decrypted)
This works great in my search, but when I alert on it, the field "decrypted" does not get appended to "WhatRan". I can click on the alert's search result and not see the value of "decrypted" in "WhatRan", and then immediately run the same search and get the value of "decrypted" in "WhatRan". What makes this happen? Can it be corrected?
G'day, can someone please help me understand how I can find the PowerShell commands (if any) an adversary has run on a system, using Splunk data? I have all the Windows Security and PowerShell logs available. I'm just not sure how to write a query that would not only find such commands but also list the full PowerShell command when it finds one. Thanks!
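If PowerShell script block logging is enabled, event 4104 in the Microsoft-Windows-PowerShell/Operational channel carries the full command text in ScriptBlockText. A starting sketch (the index and sourcetype names are placeholders that depend on how the logs are being collected):
index=wineventlog sourcetype="XmlWinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104
| table _time host ScriptBlockText
| sort - _time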
Hello fellows! I have a sourcetype called cmdb with a field called BIA for each src_host. After this join:
index=lab sourcetype=A | join type=left src_host [search index=lab sourcetype=cmdb]
most of the src_host values now show up with the BIA field, but some of them don't. That's OK, because they do not exist in the cmdb sourcetype. I want to set a fixed value for the BIA field for those hosts. I've tried a lot of things like
| eval BIA = if(len(BIA)==0, "FIX", BIA)
but it isn't working. Can someone help me?
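The hosts with no match in the cmdb subsearch come back with BIA missing (null) rather than an empty string, so len(BIA)==0 never evaluates to true. A sketch of the usual fix using coalesce:
index=lab sourcetype=A
| join type=left src_host [search index=lab sourcetype=cmdb | fields src_host BIA]
| eval BIA=coalesce(BIA, "FIX")
Alternatively, | fillnull value="FIX" BIA after the join does the same thing.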
Hello Splunk Community. The server where Splunk is located will be moved from the datacenter to another city, and I have questions about what I would need to do in Splunk so that it doesn't lose any information, or whether Splunk will just work after starting the service again. Are there any application pre-migration tasks to do, or do we just need to stop the Splunk app and service? It is only one Splunk instance working as both indexer and search head, but we receive a lot of data from universal forwarders. Thank you in advance.
I'm using this search, which gives me the information, but I need the current total number of users:
(index=* OR index=_*) (index=* OR index=_*) index=wineventlog "ComputerName=sample_server" "EventCode=4624" | fields "_time", "user" | dedup "user"
(JFI - I used a 15-minute time frame)
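If "current total users" means the number of distinct users in the time window, a sketch that replaces dedup with a distinct count:
index=wineventlog "ComputerName=sample_server" "EventCode=4624" earliest=-15m
| stats dc(user) as total_users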
Hi, I have set up a scheduled report that runs every hour and writes the result set to a CSV file. Activity -> Jobs shows that the report ran per schedule, but I don't see the expected results in the output CSV. The job manager shows the search returned 0 events, but when I open the job link, I see more than 6000 results in the table. Why do the events show up as 0 in the job manager?
I am looking to include certain fields that are in the contributing events for a certain correlation search/notable. The documentation I found is https://docs.splunk.com/Documentation/ES/latest/Admin/Customizenotables, which basically says you can add additional fields, but that this will apply to all notables in Incident Review. My question is: if notables from other correlation searches don't include an additional field, what happens? Does the field just not get displayed for that notable, or is it listed with a null value in the Incident Review dashboard?
Hi guys! I have a sourcetype "A" with some info about infrastructure; host IP is one piece of that info. I have another sourcetype "B" (same index) that has a list of critical IPs. What I'm trying to do is use eval and if to set a field value when the IP from sourcetype "A" is present in sourcetype "B". With the IP value fixed in the subsearch, the result is correct:
index=lab | eval XPTO=if([ search index=lab sourcetype=B IP="192.168.1.2"],1,0)
but I need to pass the IP dynamically from the main search, something like:
index=lab | eval XPTO=if([ search index=lab sourcetype=B IP=$IP$],1,0)
It's a simple question, like an Excel VLOOKUP function. Do you have a suggestion?
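eval can't run a subsearch per event, so a sketch of the VLOOKUP-style alternative (assuming IP is the shared field name in both sourcetypes) is to flag the critical IPs in a subsearch and left-join them in:
index=lab sourcetype=A
| join type=left IP [search index=lab sourcetype=B | eval XPTO=1 | fields IP XPTO]
| fillnull value=0 XPTO
For a large or frequently reused list of critical IPs, writing sourcetype B out with outputlookup and using the lookup command tends to scale better than join.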
Hi, I want to write the props for the logs below. The logs come in with no timestamp, and the file name contains the timestamp. These are the logs:
Message Is: https POST failed: . Status Is: Ok
Message Is: https POST successful: 200. Status Is: Ok
Changed .Pac File to http://liteway.prog2.com/proxyins/proxy_client.oac
Unable to change .Pac File to http://liteway.prog2.com/proxyins/proxy_client.oac
The file names look like zscalerhttp_2023-01-09-18-03-25
Can anyone help me write the props for these logs?
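If pulling the timestamp out of the file name isn't strictly required, a minimal props.conf sketch (the sourcetype stanza name is a placeholder) that breaks each line into its own event and stamps it with the current/index time would be:
[zscaler:http]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# the events carry no timestamp, so fall back to the time the data is indexed
DATETIME_CONFIG = CURRENT
Deriving _time from the file name itself is possible but more involved (for example an ingest-time eval on the source field in transforms.conf).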