All Topics

Hi all. New here. I have been working with some data strings that contain varied asset numbers for computers and servers. Unfortunately our naming conventions are all over the place for a small number of assets, and the server asset names differ considerably from the endpoint PCs', so I have been left using the following to capture the sequence:

(?:Computer Name:)(.*)(?:,)          -- (yucky wildcard)

For Splunk, this would be:

| rex "(?<Computer>(?:Computer Name:)(.*?)(?:,))"

The dilemma is that the non-capture group (?:Computer Name:) is being captured in the results. I am unsure, but I assume it is due to the enclosing capture group (?<Computer>...). From my limited experience playing with rex, I know that non-capture groups work when placed after a capture group, but I have had no success placing them before one. Thanks for listening to my TED talk.
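A minimal sketch of what is going on, using Python's re module (the same PCRE-style syntax rex uses); the hostname and sample line are made up. Anything inside the named group, including a non-capturing group, still ends up in the named group's captured text; keeping the literal prefix outside the named group captures only the value:

```python
import re

line = "Computer Name: WKSTN-00123, Location: HQ"

# Original shape: the named group wraps the non-capturing group too, so
# the literal "Computer Name:" still lands inside <Computer>.
original = re.search(r"(?P<Computer>(?:Computer Name:)(.*?)(?:,))", line)
print(original.group("Computer"))  # Computer Name: WKSTN-00123,

# Literal prefix *outside* the named group: only the value is captured,
# and [^,]+ avoids the lazy wildcard entirely.
fixed = re.search(r"Computer Name:\s*(?P<Computer>[^,]+)", line)
print(fixed.group("Computer"))  # WKSTN-00123
```

In rex the second form would be written as `| rex "Computer Name:\s*(?<Computer>[^,]+)"`, but treat that as a sketch to adapt, not a drop-in for your data.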
I'm getting a lot of "Agent Metric Registration Limit Reached" events in the application dashboard, stating that the agents are reaching the maximum number of metrics that can be registered. I know there is a parameter that can be passed to the Java agent to increase this metric limit (-Dappdynamics.agent.maxMetrics=<IntegerNumber>), however I want to determine exactly which metrics get counted towards this limit per agent (5,000 metrics by default). Recursively using the API, I managed to get a rough estimate of 5,020 metrics gathered for a particular agent, however I'm not sure the method I used to calculate this estimate is correct. Concretely, is there a way to determine exactly, per agent, which metrics count towards the "Agent Metric Registration Limit"? I really need this number so I can set the Java parameter appropriately, and also to know which kinds of metrics (business transactions, errors, etc.) have the most impact on this limit. Thanks.
It's been a while since I set up a props/transforms override, but I've never had so much trouble. I have 20 Foo appliances sending data to a TCP listener (unique high port) on an HF, version 7.3.3. The inputs.conf for this data is in /opt/splunk/etc/apps/search/local:

[tcp://12345]
connection_host = dns
index = foo
sourcetype = foo_log

I have 2 Foo appliances that are sending a different format to the same HF TCP port, and I want to send those to a different index=bar sourcetype=bar_log.

host 1 = abcd-1234-efgh-blahblah.com
host 2 = zyxw-9876-ghyh-blahblah.com

I have tried a number of combinations of props and transforms with no luck. The HF does not index the data, it just forwards it to the indexers. Please advise which directory to create an override stanza in for the props and transforms: .../system/local or .../search/local?

Here is what I tried with no luck. Sample props.conf:

[source::tcp:12345]
Transforms-OverRide-Foo-index = OverRide-Foo
Transforms-OverRide-Foo-sourcetype = OverRide-Foo_log

Sample transforms.conf:

[OverRide-Foo]
REGEX = (abcd* | zyxw* )
DEST_KEY = _MetaData:Index
FORMAT = bar

[OverRide-Foo_Log]
REGEX = (abcd* | zyxw*)
DEST_KEY = MetaData:Sourcetype
FORMAT = bar_log

I looked at a lot of the examples, but I must be misunderstanding. Thanks in advance!
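Not a full answer to the props/transforms layering question, but one detail worth checking: the spaces inside the REGEX alternation are significant characters, not cosmetic. A quick check in Python's re (the same PCRE-style syntax Splunk transforms use), with the host names from the post:

```python
import re

hosts = ["abcd-1234-efgh-blahblah.com", "zyxw-9876-ghyh-blahblah.com"]

# With spaces inside the alternation, each branch requires a literal
# space character, so neither host string matches.
spaced = re.compile(r"(abcd* | zyxw* )")
print([bool(spaced.search(h)) for h in hosts])  # [False, False]

# A tight, anchored alternation matches both hosts.
tight = re.compile(r"^(abcd|zyxw)")
print([bool(tight.search(h)) for h in hosts])  # [True, True]
```

Whether the regex should run against the host or the raw event depends on the SOURCE_KEY in play, so treat this only as a demonstration that the spaced pattern cannot match these strings.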
Hi there. I'm fairly new to Splunk reporting. I have inherited this fairly simple report:

index=* Level=ERROR | stats count(index) by index | sort indexTotal desc

That just shows a count of errors by index. I need to add another column to the report that shows the current date. Thanks in advance for any help!
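The usual SPL approach would be an eval along the lines of strftime(now(), "%Y-%m-%d"); as an illustration of the idea only, here is the same "append a constant date column to every row" step sketched in Python, with invented rows and an invented column name (report_date):

```python
from datetime import date

# Stand-in result rows, as the stats command might produce them.
rows = [{"index": "main", "count": 42}, {"index": "web", "count": 7}]

# Same value for every row: today's date, formatted once.
today = date.today().strftime("%Y-%m-%d")
for row in rows:
    row["report_date"] = today

print(rows[0]["report_date"])  # e.g. 2024-05-01
```

The key point either way is that the date is computed once and attached as a constant column, rather than derived per event.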
Hi Splunkers, I need to keep some sensitive data in an index, but hide it from some roles. Is there any way to do this, and if not, is it planned?
Hi AppDynamics, I have a question regarding setting up AppDynamics. I was able to install pyagent and run gunicorn using pyagent. I can see my Flask application running as usual, and I was able to generate load. When I checked the logs for the proxy, it reported that the controller did not respond.

python version - 3.6.9
flask - 1.1.1
appdynamics - 20.3.0

My cfg file:

[agent]
app = <app-name>
tier = <app-tier>
node = node f58e

[controller]
host = <account>.saas.appdynamics.com
port = 443
ssl = (on)
account = <account>
accesskey = <key>

I am attaching a screenshot of the error. Can anyone please help me with this issue?
We are having performance issues and just noticed that our F5 app is consuming a lot of CPU resources. The attached file is from one of my indexes. What can I do? Thank you.
Does anyone have examples of how to use a Splunk search to find bandwidth utilization by the top 10 users, in GB?
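In SPL this is typically a stats sum over a bytes field, a sort, a head 10, and an eval to convert to GB; the exact field names depend on the sourcetype. Here is the same aggregation sketched in Python with invented (user, bytes) records, purely to show the shape of the computation:

```python
from collections import defaultdict

# Hypothetical records standing in for proxy/firewall events.
events = [("alice", 5 * 1024**3), ("bob", 2 * 1024**3),
          ("alice", 1 * 1024**3), ("carol", 7 * 1024**3)]

# Sum bytes per user (the stats sum(bytes) by user step).
totals = defaultdict(int)
for user, nbytes in events:
    totals[user] += nbytes

# Top 10 users by total traffic, converted from bytes to GB.
top10 = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]
for user, nbytes in top10:
    print(f"{user}: {nbytes / 1024**3:.2f} GB")
```

The SPL equivalent would look roughly like `... | stats sum(bytes) as total_bytes by user | sort - total_bytes | head 10 | eval GB=round(total_bytes/1024/1024/1024,2)`, but verify the byte-count field for your data source.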
Is there a way in the Rapid7 Nexpose TA to set up or configure two unique job names, each using a different Nexpose server/IP? As far as I can tell, both inputs will use the same Nexpose server IP. I cannot find where, or if, the server can be updated for one of the inputs. For example:

one input (or unique job name) to ingest data from our on-prem Nexpose server/IP
a second input (or unique job name) to ingest data from our external Nexpose server/IP

Note: we have two separate Nexpose Consoles.
I have installed Splunk on Linux (bash shell). After installation, when I started it with the command

sudo /bin/sh splunk start --accept-license

it threw the error: /bin/sh:0: cant open splunk

I have set the file permissions as well. I've also tried running

sudo ./splunk start --accept-license

and it throws the error: Unterminated quoted string

Please advise how to proceed.
I have this query that matches two types of events, sending a request and receiving an answer. My goal is to take the times at which both of these happen, to see how long the question/answer process takes:

index = "application" ("request sent" OR "answer received")
| rex field=_raw ".*\s+:\s+(?<label>\w+).+\s+(?<guid>[a-z\d-]+)$"
| eval status=if(label="answer","complete","start")
| eval start_time=if(status="complete",null,_time), end_time=if(status="complete",_time,null)
| stats min(start_time) as startT, min(end_time) as endT by guid
| eval exportTimeInMinutes=abs(end_time-start_time)/60

This query works fine and I use it as a template for others. But the problem I am having is that I want to see an on-screen stats table which includes the exportTimeInMinutes column. When I first wrote this query, I got back a table with 4 columns: guid, startT, endT and exportTimeInMinutes. However, when I come back to the page in a future session, I no longer see the last column. Sometimes refreshing the page allows it to show up, other times it does not. Is this a bug (or even worse... a feature)?
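One thing worth double-checking in that query: the stats step renames the fields to startT and endT, but the final eval still references start_time and end_time, which no longer exist after stats, so exportTimeInMinutes would come out null, and all-null columns are dropped from the statistics table. As an illustration only (the event data here is invented), this is the pairing logic in Python, using the post-stats names:

```python
# For each guid, keep the earliest "start" and earliest "complete"
# timestamps, then diff them in minutes (mirrors stats min(...) by guid).
events = [
    ("req-1", "start", 1000.0),
    ("req-1", "complete", 1300.0),
    ("req-2", "start", 2000.0),   # no answer received yet
]

pairs = {}
for guid, status, ts in events:
    start, end = pairs.get(guid, (None, None))
    if status == "start":
        start = ts if start is None else min(start, ts)
    else:
        end = ts if end is None else min(end, ts)
    pairs[guid] = (start, end)

for guid, (start, end) in pairs.items():
    minutes = abs(end - start) / 60 if start and end else None
    print(guid, minutes)  # req-1 5.0, then req-2 None
```

In SPL terms, the analogous fix would be to compute the diff from the renamed fields (endT and startT) rather than the pre-stats names.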
Hi, I have installed the MongoDB driver from UnityJDBC (http://unityjdbc.com/mongojdbc/setup/mongodb_jdbc_splunk_dbconnect_v3.pdf) and set up a test connection. I am getting this error while saving the connection. Has anyone seen this error before? Can anyone help?

[dw-171 - POST /api/connections/status] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: b3695d7a6a8321a8
java.lang.NullPointerException: null
    at com.splunk.dbx.connector.logger.AuditLogger.replace(AuditLogger.java:50)
    at com.splunk.dbx.connector.logger.AuditLogger.error(AuditLogger.java:44)
    at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getStatus(DatabaseMetadataServiceImpl.java:159)
    at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getConnectionStatus(DatabaseMetadataServiceImpl.java:116)
    at com.splunk.dbx.server.api.resource.ConnectionResource.getConnectionStatusOfEntity(ConnectionResource.java:72)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
    at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
    at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:50)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
    at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47)
    at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:249)
    at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52)
    at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717)
    at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54)
    at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:500)
    at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:388)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
    at java.lang.Thread.run(Thread.java:748)
I am trying to split HEC data into multiple sourcetypes based on regex. The Docker platform we are using only provides three inputs for sending data to Splunk this way per group of servers:

Host
Port
Splunk Token

So I cannot define the sourcetype by token, and I have zero access to that interface to play around with it. In short, the data all has to come in on the same token.

Now to the question. I am referencing various answers on this site, along with the .spec Splunk documentation for props.conf and transforms.conf, in particular https://community.splunk.com/t5/Getting-Data-In/How-to-split-a-log-into-multiple-sourcetypes-on-a-heavy/td-p/173635 and https://community.splunk.com/t5/Getting-Data-In/How-to-ingest-logs-from-a-single-file-and-split-them-into-2/td-p/256349

I am trying to split this one HEC sourcetype, "mule_system", into multiple sourcetypes by REGEX pattern on my heavy forwarder. I have had some success, but with a weird side effect: the transforms work, and in a real-time (30 sec) search the data appears as two sourcetypes. In a longer-running search of the same events (15 m), it reverts back to the sourcetype defined in inputs.conf for this HEC token, "mule_system".

Here are the config files I am using on my heavy forwarder. Keep in mind that more transforms statements will be inserted as we categorize the events into sourcetypes.

inputs.conf:

[http://mule_hec]
disabled = 0
index = sandbox
sourcetype = mule_system
token = b7483125-9d88-49b9-bd90-xxxxxxxxxx
source = mule_hec

props.conf:

[source::mule_hec]
#all matches everything for testing
#TRANSFORMS-changeSourcetype = mule-app-all
TRANSFORMS-changeSourcetype1 = mule-app-setsourcetype
TRANSFORMS-changeSourcetype2 = mule-http-request-setsourcetype

transforms.conf:

#testing to see if transforms is working at all.
[mule-app-all]
DEST_KEY = MetaData::Sourcetype
REGEX = .*?
FORMAT = sourcetype::mule-rtf
WRITE_META = true

[mule-app-setsourcetype]
DEST_KEY = MetaData::Sourcetype
REGEX = (LoggerMessageProcessor)
FORMAT = sourcetype::mule-app
WRITE_META = true

[mule-http-request-setsourcetype]
DEST_KEY = MetaData::Sourcetype
#REGEX = msg=\"http request\"
REGEX = user-agent
FORMAT = sourcetype::mule-http-request
WRITE_META = true
Hey all, apologies if this question was previously posted, but I wasn't able to find it. Is there an app for OneLogin for Splunk Cloud? I found the link to OneLogin's doc regarding one (https://onelogin.service-now.com/support?id=kb_article&sys_id=8478ba0bdb091c90d5505eea4b961960), but it specifically mentions Splunk Enterprise. I also wasn't able to find any OneLogin apps in the built-in app catalog in Splunk Cloud. Does anyone have any experience onboarding OneLogin logs into Splunk Cloud? Thanks! Matt
I installed the on-premises AppDynamics 15-day free trial in my lab so I could learn the product, deploy agents, etc. The trial period ended and I received an email that it will transition to the Lite version, but digging in further, it looks like that might only apply to the SaaS version. Does anyone know if the on-prem version transitions to the Lite version? The reason I ask is that under Licenses all applications show "not licensed", and I read that there should be at least one license of each type, albeit limited. https://docs.appdynamics.com/display/PRO45/Lite+and+Pro+Editions
I have a dashboard, and at the top I would like to display a horizontal selection for the user, such that they can select report A, report B, or report C. When they click one, it will open that report in a new window. This could be accomplished through 3 checkboxes, radio buttons, an HTML link, etc., but I'm not sure how to do it. I thought of radio buttons, so I can start them all unset; then, when a user selects one, it unsets the other two and opens a window with the report. But I'm lost as to how to accomplish this. Any pointers would be appreciated.
Hey, I am new to Splunk and wanted to know how to carry out searches from inside a specific app through the Python SDK. My search query is for a specific app and does not produce any results if I search it in the global Search & Reporting app. I have established the connection with the server, but as the query needs to be searched from inside a specific app, I am not able to get any results. Is there an app parameter which is required while connecting to the server? Help will be appreciated.

My connection code is:

import splunklib.client as client
import splunklib.results as results

HOST = "hostname"
PORT = "8089"
USERNAME = "uname"
PASSWORD = "passwd"

# Create a Service instance and log in
service = client.connect(
    host=HOST,
    port=PORT,
    username=USERNAME,
    password=PASSWORD
)

# Print installed apps to the console to verify login
for app in service.apps:
    print(app.name)

kwargs_export = {"earliest_time": "-1h", "latest_time": "now", "search_mode": "normal"}
searchquery_export = 'myquery'
exportsearch_results = service.jobs.export(searchquery_export, **kwargs_export)

# Get the results and display them using the ResultsReader
reader = results.ResultsReader(exportsearch_results)
for result in reader:
    if isinstance(result, dict):
        print("Result: %s" % result)
    elif isinstance(result, results.Message):
        # Diagnostic messages may be returned in the results
        print("Message: %s" % result)

# Print whether results are a preview from a running search
print("is_preview = %s" % reader.is_preview)
Hello, my goal is to use JavaScript to update the "earliest" value of a time picker input to 6 months ago whenever the user selects a time range with an "earliest" value more than 6 months ago. I don't want a user to be able to extract more than 6 months of data, but at the same time I want them to be able to use the flexibility of the time picker input. So far I have:

Created a token "tok_six_months_ago" under the <init> tag on the dashboard, set to relative_time(now(),"-6mon@mon")
Defined a time picker input with name "tok_data_period"
Converted the "tok_data_period" earliest and latest values to epoch via the <change> tag:

<eval token="tok_earliest_epoch">if(isnum('earliest'),'earliest',relative_time(now(),'earliest'))</eval>
<eval token="tok_latest_epoch">if(isnum('latest'),'latest',relative_time(now(),'latest'))</eval>

Created a JavaScript file that monitors changes to "tok_earliest_epoch" and alerts the user when it is out of bounds (it checks earliest against "tok_six_months_ago"):

def_tok.on("change:tok_earliest_epoch", function() {
    if(def_tok.get("tok_earliest_epoch") < def_tok.get("tok_six_months_ago")) {
        alert("Lower time limit is out of bounds: data will only be shown for the last 6 months");
    }
});

What I would like to do now, but don't know how, is update the time picker token on the form (tok_data_period.earliest) with the epoch contained in "tok_six_months_ago". Can somebody please help? Thanks! Andrew
What is the maximum recommended size for asset/identity lookups? https://dev.splunk.com/enterprise/docs/developapps/enterprisesecurity/assetandidentityframework/  I've had issues with Splunk handling large numbers of assets and/or identities.  I increased the maximum bundle size to 4GB, but still had to distribute the entire huge bundle every time an identity changed. Is there an option to use a KV store for assets & identities? Or a way to update them with a diff, rather than pushing the entire lookup? Is there a memory requirement for a certain number of assets & identities? Or any related performance impact for having a large number of assets & identities?
Hi all, I want to convert my table to a chart, but somehow I can't. This is my search (screenshot attached), and the result is something like the attached table. I want to convert this table to a chart (visualization). When I click the Visualization tab, as you can see in the screenshot, there are no values for the Y-axis. Can somebody help me? Thanks a lot.