All Topics

Hi Community! Despite lots of reading and my best efforts to find the answer in the documentation, I can't see why introducing a deployment server is causing issues with data getting into Splunk Cloud, so I'd really appreciate some help. I have 4 test servers and have completed the steps below:

The intermediate forwarders have Splunk Cloud set as the forwarding destination in outputs.conf (this is also verified when issuing CLI commands).
The intermediate forwarders have a receiving port configured.
The intermediate forwarders have the Splunk Cloud credentials installed.

When the UFs have the intermediate forwarders set as their forwarding destination, the CLI shows that this config is good, and the forwarding clients' data ultimately reaches the cloud index with no problem. However, when I introduce the deployment server, no data reaches Splunk Cloud, and when issuing CLI commands the forwarding client just hangs. When I look in /etc/deployment-app/<app-folder>/default on the deployment server, there's no outputs.conf file, so I suspect the server config is missing something.

I used this guide as a setup reference for the deployment server (specifically step 3), but I still feel I've missed something: https://docs.splunk.com/Documentation/SplunkCloud/8.1.2011/Admin/WindowsGDI

Any suggestions would be really appreciated. Thanks!
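Since the deployed app's default/ directory contains no outputs.conf, the app being pushed to the forwarders may be replacing their forwarding configuration with nothing. A minimal sketch of what such an app could carry — the app name, group name, host name, and port below are placeholders, not your real values:

```ini
# Hypothetical app on the deployment server, e.g.
# $SPLUNK_HOME/etc/deployment-apps/org_cloud_outputs/local/outputs.conf
[tcpout]
defaultGroup = splunkcloud_indexers

[tcpout:splunkcloud_indexers]
server = inputs.yourstack.splunkcloud.com:9997
```

Since the Splunk Cloud UF credentials app already contains a working outputs.conf, another option may be to distribute that credentials app itself through the deployment server rather than installing it by hand on each forwarder.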
Upon a fresh installation, or even when restarting the Splunk instance, it says "The Splunk web interface is at https://servername:8000", but when I try to access that URL it doesn't work, and neither does the server's FQDN, https://servername.dns.test:8000. I can only access the Splunk Web URL using its IP address, e.g. https://10.xx.x.xx:8000. Is this related to the DNS setup of the server? Any ideas on this issue? Thanks.
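If the IP address works but neither the short name nor the FQDN does, the machine running the browser most likely cannot resolve those names, which does point at DNS (or hosts-file) setup rather than at Splunk itself. As a stopgap while DNS is fixed, a hosts-file entry on the client machine can map the names to the address — all values below are placeholders for your real ones:

```text
# /etc/hosts on Linux, or C:\Windows\System32\drivers\etc\hosts on Windows.
# Replace with the server's actual IP address and names.
10.xx.x.xx   servername servername.dns.test
```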
Sending an email alert when the error count > 0 works, but how can I include the table data/values in the email alert? (table _time, ERROR_CD, HAWB, UREF, LRN, MRN, ER1_ER9_Details)

Query:

index=gbs_its_openshift_exp-ics2 openshift_container_name="regulatory-engine" "ER1/ER9 errors"
| rex field=_raw "uref:(?<UREF>\w+)"
| rex field=_raw "hawb:(?<HAWB>\w+)"
| rex field=_raw "lrn:(?<LRN>\w+)"
| rex field=_raw "mrn:(?<MRN>\w+)"
| rex field=_raw "rrr:(?<RRR>\w+)"
| rex field=_raw "ER1\/ER9\serrors:(?<ER1_ER9_Details>.+)"
| rex field=_raw "Err-\[(?<ERROR_CD>\w*)\]"
| table _time, ERROR_CD, HAWB, UREF, LRN, MRN, ER1_ER9_Details
| stats count
| search count > 0
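The trailing `| stats count | search count > 0` collapses the results down to a single number, so by the time the email fires there is no table left to embed. One approach (a sketch, not tested against your data) is to let the search return the table itself, move the zero check into the alert's trigger condition ("Number of Results" is greater than 0), and enable "Include results inline" in the email action:

```spl
index=gbs_its_openshift_exp-ics2 openshift_container_name="regulatory-engine" "ER1/ER9 errors"
| rex field=_raw "uref:(?<UREF>\w+)"
| rex field=_raw "hawb:(?<HAWB>\w+)"
| rex field=_raw "lrn:(?<LRN>\w+)"
| rex field=_raw "mrn:(?<MRN>\w+)"
| rex field=_raw "ER1\/ER9\serrors:(?<ER1_ER9_Details>.+)"
| rex field=_raw "Err-\[(?<ERROR_CD>\w*)\]"
| table _time, ERROR_CD, HAWB, UREF, LRN, MRN, ER1_ER9_Details
```

With the count check in the trigger settings, the alert still fires only when there are matching events, and the emailed results are the table rows themselves.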
We are facing an issue while parsing a lengthy JSON file: Splunk is picking up incomplete data. Attaching the specifications of the sourcetype used; any help would be appreciated. Thanks!
When the Okta Identity Cloud Add-on for Splunk saves Okta log data into Splunk, Japanese characters are stored in a Unicode-escaped state and never unescaped. For example, the characters "田中" in the original log are saved in Splunk as the converted form "\u7530\u4e2d". As a result, we cannot reach the logs we want by searching with Japanese characters. For example, I would expect the search below to find logs containing "田中", but nothing is actually found: index="okta_logs" 田中. To fix this, I think the source code of the Okta Identity Cloud add-on needs to be modified. I am asking for the Okta Identity Cloud Add-on for Splunk to gain a function that Unicode-unescapes multi-byte characters.
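Until the add-on unescapes multi-byte characters at ingest time, one possible workaround (untested, and dependent on how Splunk segments the raw events) is to search for the escaped form that is actually stored in the index. The doubled backslashes below escape the literal backslash in the raw event, and the code points shown correspond to 田中:

```spl
index="okta_logs" "\\u7530\\u4e2d"
```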
Hi there, how can I hide the Activity and Help menus in the Splunk navigation bar for a user? Best Regards,
Hi, my environment consists of a deployment server and heavy forwarders. The Windows Server clients have a universal forwarder sending to the heavy forwarder. Can someone tell me which configuration file on the Windows server tells it which heavy forwarder to send the data to?
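On the Windows client, the universal forwarder's destination is set in outputs.conf under the forwarder's installation directory. A sketch with placeholder values:

```ini
# Typically %SPLUNK_HOME%\etc\system\local\outputs.conf, e.g.
# C:\Program Files\SplunkUniversalForwarder\etc\system\local\outputs.conf.
# The host name and port are placeholders for your heavy forwarder's values.
[tcpout]
defaultGroup = heavy_forwarders

[tcpout:heavy_forwarders]
server = hf1.example.com:9997
```

If the forwarder is managed by the deployment server, the same setting may instead live in a deployed app under etc\apps\<app>\local\outputs.conf, which takes effect alongside system\local.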
Hi, we have a use case where responses (host_addr) returned from DNS queries are passed through the AbuseIPDB API to check for any potential matches. Since the API has a set limit, we don't want to query an IP more than once. To achieve this, stats is used to get distinct values, which are then passed through the API. It works well, but due to the use of stats we lose all the other crucial fields from the original data, e.g. src_ip, query, etc. Here's a sample query:

<Base Search>
| stats count by host_addr
| table host_addr
| abuseip ipfield=host_addr
| sort - AbuseConfidence

Could eventstats come to the rescue here? If so, what could be a potential syntax for that search? From the other examples I've seen, eventstats seems to be more useful when performing an actual stats function like sum, etc. The end goal is to create something like

| table src_ip, query, host_addr, LastReportedAt, AbuseConfidence

while keeping API limits in check (using only unique values of host_addr). Any pointers on this will be appreciated. Thanks, ~ Abhi
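Rather than eventstats, one pattern (a sketch reusing your own abuseip invocation, untested) is to keep the per-IP grouping but carry the context fields through stats as multivalue fields. Each unique host_addr is still sent to the API exactly once, while the associated src_ip and query values survive:

```spl
<Base Search>
| stats values(src_ip) as src_ip, values(query) as query, count by host_addr
| abuseip ipfield=host_addr
| sort - AbuseConfidence
| table src_ip, query, host_addr, LastReportedAt, AbuseConfidence
```

The trade-off is that src_ip and query become multivalue fields listing every source that resolved that host_addr; if you need one row per original event instead, the API results could be joined back to the raw events, but that costs a second pass over the data.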
Hello, I'd like to see the possibility of a new dashboard interface that doesn't take us to a new page when we click a dashboard name. Instead, I want all the dashboards listed in the left panel (marked red in the picture below), with all the contents shown in the larger panel on the right side (marked blue). Thanks!
I created a deployment app (which is distributed to Windows universal forwarders) from my Linux deployment server. Inside Windows\Local\ I have an inputs.conf file that looks like this:

[WinEventLog://System]
blacklist = EventCode=xxxx

When the app gets delivered to the Windows universal forwarders, the inputs.conf file in the deployed app looks like this:

[WinEventLog://System]blacklist = EventCode=xxxx

The contents of inputs.conf are all on one line, causing the blacklisting not to work. Any ideas on what I'm doing wrong? Jon
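One thing worth checking before assuming the deployment server mangled the file: a Linux deployment server writes LF-only line endings, which classic Windows Notepad displays as a single run-together line even though the file still contains separate lines, and Splunk's conf parser on Windows handles LF fine. The sketch below (placeholder EventCode and /tmp path) recreates such a file and shows it really does hold two lines:

```shell
# Recreate the deployed file with LF-only endings, as a Linux deployment
# server would write it (the stanza and EventCode are placeholders).
printf '[WinEventLog://System]\nblacklist = EventCode=1234\n' > /tmp/inputs.conf

# Two newline-terminated lines: the stanza header and the setting are
# separate lines, even if Notepad displays them run together.
wc -l < /tmp/inputs.conf
```

If `splunk btool inputs list WinEventLog://System --debug` on the forwarder shows the blacklist setting, the one-line display was only a viewer artifact, and the cause of the blacklist not working lies elsewhere.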
We have a custom streaming search command written in Python that works fine on a single instance, but we ran into the following error in our clustered environment:

ImportError: No module named {a Python package we depend on}

We narrowed down the cause to the same problem described in this post: Custom streaming search command error. We implemented the suggested fix of moving all of our dependencies into the /bin directory so they would be available when our command runs across the indexers. Everything now seems to work as expected in our cluster environment; however, appinspect now gives us this warning:

check_splunklib_dependency_under_bin_folder WARNING: splunklib is found under bin folder, this may cause some dependency management errors with other apps, and it is not recommended. Please follow examples in Splunk documentation to include splunklib. You can find more details here: https://dev.splunk.com/view/SP-CAAAEU2 and https://dev.splunk.com/view/SP-CAAAER3

Has anyone else run into something similar to the original problem I described? Are there any suggestions that would solve both the original problem and our new warning?
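One pattern worth trying, mirroring the approach in Splunk's developer docs: vendor the dependencies in an app-level lib/ folder (a sibling of bin/, so it ships in the app bundle and is replicated to the indexers) and extend sys.path at the top of the command script. The folder name and layout below are assumptions; whether this silences the specific appinspect check is worth verifying, but it does get splunklib out from directly under bin/:

```python
import os
import sys

# Directory containing this script: the app's bin/ folder once deployed.
bin_dir = os.path.dirname(os.path.abspath(__file__))

# In this sketch, third-party packages (splunklib and the rest of our
# dependencies) are vendored in an app-level lib/ folder next to bin/,
# so they travel with the app bundle to the indexers.
lib_dir = os.path.abspath(os.path.join(bin_dir, "..", "lib"))

# Prepend so this app's copies take precedence over any versions that
# other apps happen to ship.
if lib_dir not in sys.path:
    sys.path.insert(0, lib_dir)
```

The sys.path manipulation must run before any import of the vendored packages, so it belongs at the very top of the command script.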
Hello, I have been asked to calculate TPS (average and peak) for API calls that took 1) <1 sec to respond, 2) between 3 and 5 secs, and 3) >5 secs, broken down across multiple different API calls. Something like this:

Route | <1s Avg TPS | <1s Max TPS | 3-5s Avg TPS | 3-5s Max TPS | >5s Avg TPS | >5s Max TPS

I am able to get these separately in multiple Splunk queries like the ones below, but I need them with the breakdown of event response time as above. Here are my queries:

1) Breakdown by response-time bucket:

index=XXX service_name=YYY request_host=ZZZ
| rex field=_raw "AAA"
| rex field=request_route "^(?<route>.*)\?"
| rex field=_id "^(?<route>.*)\?"
| eval pTime = total_time
| eval TimeFrames = case(pTime<=1000, "0-1", pTime>1000 AND pTime<=3000, "1-3", pTime>3000 AND pTime<=5000, "3-5", pTime>5000 AND pTime<=8000, "5-8", pTime>8000, ">8")
| stats count as CallVolume by route, TimeFrames
| eventstats sum(CallVolume) as Total by route
| eval Percentage=(CallVolume/Total)*100
| sort by route, -CallVolume
| fields route, CallVolume, TimeFrames, Percentage
| chart values(CallVolume) over route by TimeFrames
| sort -TimeFrames

2) TPS:

index=XXX service_name=YYY request_host=ZZZ
| rex field=_raw "AAA"
| rex field=request_route "^(?<route>.*)\?"
| eval resptime = total_time
| bucket _time span=1s
| stats count as TPS by _time, route
| stats max(TPS) as PeakTPS, avg(TPS) as AvgTPS by route
| fields route, PeakTPS, AvgTPS
| sort PeakTPS desc

Can you please help?
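One way to combine the two (a sketch reusing your own field names, untested against real data) is to carry the response-time bucket through the per-second count, then aggregate and pivot by both route and bucket. Note it only covers the three buckets in the ask, so 1-3 s calls are deliberately excluded:

```spl
index=XXX service_name=YYY request_host=ZZZ
| rex field=_raw "AAA"
| rex field=request_route "^(?<route>.*)\?"
| eval pTime = total_time
| eval TimeFrames = case(pTime<=1000, "<1s", pTime>3000 AND pTime<=5000, "3-5s", pTime>5000, ">5s")
| where isnotnull(TimeFrames)
| bucket _time span=1s
| stats count as TPS by _time, route, TimeFrames
| stats avg(TPS) as AvgTPS, max(TPS) as MaxTPS by route, TimeFrames
| eval AvgTPS = round(AvgTPS, 2)
| chart values(AvgTPS) as AvgTPS, values(MaxTPS) as MaxTPS over route by TimeFrames
```

The final chart produces one column per (metric, bucket) pair, which matches the shape of the table in the ask.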
I'm trying to modify this default correlation search:

| from inputlookup:access_tracker
| stats min(firstTime) as firstTime, max(lastTime) as lastTime by user
| where ((now()-'lastTime')/86400)>90

I want to exclude results from this search when the field "user" has a value that begins with "bob". Thanks in advance.
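One way to add that exclusion (a sketch): extend the where clause with like(), whose % wildcard matches any value beginning with "bob":

```spl
| from inputlookup:access_tracker
| stats min(firstTime) as firstTime, max(lastTime) as lastTime by user
| where ((now()-'lastTime')/86400)>90 AND NOT like(user, "bob%")
```

If the match should be case-insensitive, wrapping the field first (for example `NOT like(lower(user), "bob%")`) covers values like "Bob" as well.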
Hello @jkat54, we are currently using the Log Analytics TA to ingest Azure SQL data into Splunk. Although the data is being ingested, we run into situations where data ingestion stops. To fix the issue, we have a manual workaround of disabling and then re-enabling the inputs. The parameters provided are:

[log_analytics://AZURESQL_CONNECT]
resource_group = <resource_group>
workspace_id = <workspace_id>
subscription_id = <subscription_id>
tenant_id = <tenant_id>
application_id = <application_id>
application_key = ********
log_analytics_query = AzureDiagnostics | where ResourceProvider == 'MICROSOFT.SQL' | where ResourceGroup contains 'CONNECT' | where Category == 'SQLSecurityAuditEvents'
start_date = 07/10/2020 09:00:00
event_delay_lag_time = 10
index = <index>
interval = 300
sourcetype = <sourcetype>

Is there a good way to optimize the inputs for consistent data ingestion? Any advice or suggestions would be appreciated. Thanks. Regards, Max
While following https://www.splunk.com/en_us/blog/tips-and-tricks/splunking-microsoft-teams-data.html, I set up the webhook URL, but when trying to fetch subscription/call-record data I get this error in the logs:

2021-01-15 15:03:15,233 INFO pid=20819 tid=MainThread file=setup_util.py:log_info:117 | Proxy is not enabled!
2021-01-15 15:03:15,584 ERROR pid=20819 tid=MainThread file=base_modinput.py:log_error:309 | Could not create subscription: 400 Client Error: Bad Request for url: https://graph.microsoft.com/beta/subscriptions

In the TA setup, the proxy configuration asks for host, port, username, and password. What should these be? Also, where can I find the webhook URL to test the input created with the Teams webhook input using the curl command mentioned in the blog post? @jconger Thanks,
Can I install the AppDynamics .NET Agent on Windows Server 2008 R2?
Hello Splunkers, we are trying to restrict users (non-admins) from creating knowledge objects (dashboards and reports) in our custom apps. We would like users to use the dashboards provided but not save any dashboards of their own. We have not given users write permission to the app. Also, in the default.meta config file we added the stanzas below:

[viewstates]
access = read : [ * ], write : [ admin ]
export = system

[views]
access = read : [ * ], write : [ admin ]

We have a few groups that have write access to the default Search & Reporting app. Is this causing users to create KOs in the custom app? If not, can someone suggest a way to restrict users from doing so?
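The [views] and [viewstates] stanzas only cover those two object types, so they don't stop users from saving reports or other knowledge objects in the app. If the intent is to block non-admin writes across the whole app, the empty stanza in the app's metadata/default.meta applies to every object type. A sketch — adjust the role list to your environment:

```ini
# metadata/default.meta in the custom app:
# the empty stanza [] applies to all object types in this app.
[]
access = read : [ * ], write : [ admin ]
```

Write access to the default Search & Reporting app lets those groups create knowledge objects there, but it should not by itself grant writes in a custom app whose own metadata restricts them.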
Hello everyone, I need help sending information to 2 indexers. At the client's request, I need to send information from a heavy forwarder to indexer A; if indexer A goes down, the information must reach indexer B, and when indexer A is back online the heavy forwarder must send the information to indexer A again. Indexer A must have priority. I need to know how to tell the heavy forwarder to send the data to indexer A whenever indexer A is online. Thank you
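As far as I know, a forwarder's tcpout group has no built-in "prefer A, fail over to B" ordering: listing both indexers in one group load-balances across whichever targets are up, which gives availability but not strict priority. A common baseline sketch (host names and ports are placeholders):

```ini
# outputs.conf on the heavy forwarder. Note: this load-balances across
# A and B rather than strictly preferring A.
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexerA.example.com:9997, indexerB.example.com:9997
```

If strict priority is a hard requirement, one workaround is to point outputs.conf at a single DNS name and repoint that name from A to B (and back) outside of Splunk; the forwarder buffers and retries while the target is unreachable, so data resumes when the name resolves to a live indexer again.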
Hi All, I have a requirement. I have a dropdown as below:

<input type="dropdown" token="OrgName" searchWhenChanged="true">
  <label>Org Name</label>
  <choice value="accertify">accertify</choice>
  <choice value="gcp">gcp</choice>
  <choice value="b_marketingforce">b_marketingforce</choice>
  <initialValue>gcp</initialValue>
  <default>gcp</default>
</input>

Below is my query for the panel, where I am passing the dropdown value like this:

<query>|inputlookup HealthCheck.csv| where $OrgName$|table Date OrgHealth%</query>

But it's not taking the token. Can someone guide me on how I need to pass the token?
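`| where $OrgName$` expands to something like `| where gcp`, which is not a valid predicate, so the search fails. Assuming the lookup has a column holding the org name (the column name OrgName below is an assumption about your CSV), compare that field to the quoted token instead. A sketch:

```spl
| inputlookup HealthCheck.csv
| where OrgName="$OrgName$"
| table Date OrgHealth%
```

The quotes matter: they make the token's value a string literal rather than a bare field reference.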
Greetings, I have an architectural question about an on-prem migration to Azure/AWS. It is a complex question, so I'll try to keep it simple. Assume you have a very large Splunk footprint: 20+ indexers with 48 physical cores each, search heads with 30 cores, etc. Do you need to create a 1:1 match in a cloud provider, or can you use smaller hardware and scale out to save money? Physical CPUs and vCPUs aren't equivalent, so the hardware for a 1:1 match is more expensive. When I look at Splunk's proposed Azure architecture, they use much smaller VMs for the indexers (8 cores) and scale out. I'm looking for thoughts/advice on whether you would move these 1:1 into the cloud or rearchitect for a smaller VM size. I'm leaning toward rearchitecting, as cost is a big component of this. I'm not sure how to equate 48-core on-prem indexers with smaller 8-vCPU VMs, though; I don't think it would be 6 Azure VMs to one 48-core physical box. Any advice/thoughts are appreciated.