All Topics

We are not sure how data is stored in IBM ALM or what storage concept it uses. We are now looking for a solution to collect and index data into Splunk directly from IBM ALM. Does anyone have an idea or an implemented solution for how to collect this data, or how to approach it?
index IN (A,B) sourcetype IN (A,B) earliest=-12h latest=@m
| transaction UUID keepevicted=true
| eval ReportKey="Today"
| append
    [ search index IN (A,B) sourcetype IN (A,B) earliest=-12h-1w latest=@m-1w
      | transaction UUID keepevicted=true
      | eval ReportKey="LastWeek"
      | eval _time=_time+60*60*24*7 ]
| timechart span=30m count(linecount) as Volume by ReportKey
| fields _time, Today, LastWeek

This search is taking a long time to load, so I am trying to modify it. Can you please help me with this?

Thanks in advance,
Veerendra
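A possible direction (a sketch only, not tested against this data): transaction is usually the expensive step. If each UUID only needs to be counted once, anchored at its first event, stats can often stand in for transaction; the sketch below assumes that and does not reproduce transaction semantics such as maxspan or eviction handling.

index IN (A,B) sourcetype IN (A,B) earliest=-12h latest=@m
| stats min(_time) as _time by UUID
| eval ReportKey="Today"
| append
    [ search index IN (A,B) sourcetype IN (A,B) earliest=-12h-1w latest=@m-1w
      | stats min(_time) as _time by UUID
      | eval ReportKey="LastWeek"
      | eval _time=_time+60*60*24*7 ]
| timechart span=30m count as Volume by ReportKey

Note that the appended subsearch is still subject to subsearch result limits, so the "LastWeek" series may need its own scheduled search if the UUID count is very large.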
Hello, I am trying to format multi-value cell data in a dashboard table using mvmap in an eval token before passing it on to a drilldown; however, I am unable to figure out how to format the eval function, or whether this approach would work at all. I would appreciate it if someone could tell me why this function fails. I have included a test dashboard which shows sample data (sample column) and the format that I would like to create (test column). Unfortunately, the 'temptoken' token never gets evaluated.

Note, I understand that I could use different workarounds to avoid using mvmap in an eval token, such as creating a hidden field in the table and using it for drilldown, or using different eval functions (depending on the use case). I am specifically interested in the format of using mvmap in an eval token, as this function could be really useful in more complex cases that I have to deal with.

<dashboard>
  <label>mvmap in eval token</label>
  <row>
    <panel>
      <table>
        <search>
          <query>
            <![CDATA[
            | makeresults
            | fields - _time
            | eval sample = "text1 -> text2,text3 -> text4"
            | eval sample = split(sample, ",")
            ``` the SPL above this line will generate the sample data ```
            | eval test = mvmap(sample, split(sample, " -> "))
            ]]>
          </query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">cell</option>
        <drilldown>
          <condition match="$click.name2$==&quot;sample&quot;">
            <!-- This eval function is not working -->
            <eval token="temptoken">mvmap('row.sample', split('row.sample', " -> "))</eval>
          </condition>
          <condition match="$click.name2$==&quot;test&quot;">
            <eval token="temptoken2">'row.test'</eval>
          </condition>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <html>
      <p>temptoken: $temptoken$</p>
      <p>temptoken2: $temptoken2$</p>
    </html>
  </row>
</dashboard>

Best Regards,
Robert
Hi,

I have been trying to deploy the Enterprise Security 7-day free trial Sandbox for days now without success. Each time I attempt to subscribe, I get the following error:

Proxy Error
The proxy server received an invalid response from an upstream server. The proxy server could not handle the request.
Reason: DNS lookup failure for: uw2-iteng-prd-ss-cf-cloud-trial-1821109459.us-west-2.elb.amazonaws.com

Additionally, a 502 Bad Gateway error was encountered while trying to use an ErrorDocument to handle the request.

I don't know why this is happening. Can someone please advise me on what to do?
Can I manage summary index gaps? My scheduled searches missed some runs, and now I need to fill the gaps in my summary index.
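For reference, Splunk ships a backfill script for exactly this situation, $SPLUNK_HOME/bin/fill_summary_index.py, which re-runs a scheduled summary-indexing search over the missed time range. A sketch of a typical invocation (the app, search name, time range, and credentials are placeholders to adjust):

cd $SPLUNK_HOME/bin
./splunk cmd python fill_summary_index.py -app search -name "my summary search" -et -7d@d -lt now -j 8 -dedup true -auth admin:changeme

The -dedup true flag skips time ranges that already have summary data, so existing periods are not double-counted.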
Hi, I have a dashboard that uses a base search for all the panels. When I run the base search outside of the dashboard, it takes about 7 seconds to complete, but when I open the dashboard, the panels are only completely loaded after a minute or more. The dashboard has many panels and many filters and tokens - could that affect the slow performance? How can I improve the performance?

Thanks.
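One thing worth checking is how the base search and the post-process searches are structured. A common pattern (a sketch with made-up index, field, and panel names) is to keep the base search lean, limit it with a fields clause, and push the aggregation into the per-panel post-process searches:

<form>
  <search id="base">
    <query>index=myindex sourcetype=mysourcetype | fields host status response_time</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <search base="base">
          <query>stats count by status</query>
        </search>
      </chart>
    </panel>
  </row>
</form>

If every panel instead runs its own full search, or the base search returns raw events without a fields/table clause, the search head and browser do far more work than the 7-second standalone run suggests; the job inspector on an individual panel usually shows where the time goes.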
Hi All, I have done an index search for disk data and then a lookup against the CSV to check, per Application, which servers' data needs to be displayed in the dashboard panel. Can someone suggest how to get the server list from the CSV per Application and then pull the disk performance data from the index? I am doing the below, but I am not able to use the sv_value in the index search.

| inputlookup Server_details.csv
| search Application="app name"
| stats dc(Server) as "Count of Server", values(Server) as Server by Application
| eval Server = mvjoin(Server, " OR ")
| stats values(Server) as sv_value

Please suggest.

Regards,
Nayan
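A possible direction (a sketch; the index, sourcetype, and disk field names are assumptions): instead of building an OR string with mvjoin, a subsearch against the lookup can generate the host filter for the main search:

index=os_disk sourcetype=disk_perf
    [ | inputlookup Server_details.csv
      | search Application="app name"
      | fields Server
      | rename Server as host
      | format ]
| stats avg(used_pct) as avg_used_pct by host

The rename makes the subsearch emit host="..." terms, so it should be adjusted to whatever field the disk events actually use to identify the server.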
I added two new indexers to our 10-indexer "cluster" (we have a replication factor of 1, so I'm using the quotes, because it's really more of a simple distributed search setup; but we have a master node and we can rebalance, so it counts as a cluster ;-)) and I ran rebalance so the data would get redistributed across the whole environment. And now I'm a bit puzzled.

Firstly, the 2 new indexers are stressed with datamodel acceleration. Why is that? I would understand if all indexers needed to re-accelerate the datamodel, but only those two? (I wouldn't be very happy if I had to re-accelerate my TB-sized indexes, but I'd understand.) I did indeed start the rebalancing around 16:30 yesterday.

Secondly - I can't really understand some of the effects of the rebalancing. It seems that even after rebalancing the indexers aren't really well-balanced. Example: the 9th one is the new indexer. I see that it has 66 buckets, so some of the buckets were moved to that server, but I have no idea why the average bucket size is so low on this one. And this is quite consistent across all the indexes - the numbers of buckets are relatively similar across the deployment, but the bucket sizes on the two new indexers are way lower than on the rest. The indexes config (and most of the indexers' config) is of course pushed from the master node, so there should be no significant difference (I'll do a recheck with btool anyway).

And the third one is that I don't know why the disk usage is so inconsistent across various reporting methods.

| rest /services/data/indexes splunk_server=*in*
| stats sum(currentDBSizeMB) as totalDBSize by splunk_server

gives me about 1.3-1.5 TB for the new indexers, whereas df on the server shows about 4.5 TB of used space. OK. I correlated it with

| dbinspect index=*
| search splunk_server=*in*
| stats sum(rawSize) sum(sizeOnDiskMB) by splunk_server

and it seems that the REST call gives the raw size, not the summarized data size. But then again, dbinspect shows that:

1) Old indexers have around 2.2 TB of sum(rawSize), whereas the new ones have around 1.3 TB.
2) Old indexers have 6.5 TB of sum(sizeOnDiskMB), the new ones 4.5 TB.
3) On the new indexers the 4.5 TB is quite consistent with the usage reported by df. On the old ones there is about 1 TB used "extra" on the filesystems. Is it due to some unused but not yet deleted data? Can I identify where it's located and clean it up?
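One way to narrow down where the "extra" space lives (a sketch; interpret with care, since dbinspect reports bucket directories and may not account for everything on the volume, such as dispatch or other non-index data):

| dbinspect index=*
| search splunk_server=*in*
| stats sum(sizeOnDiskMB) as onDiskMB count as buckets by splunk_server, index, state
| sort - onDiskMB

Comparing these per-index, per-state totals against df on each indexer should show whether the difference sits inside particular indexes and bucket states or outside the index directories altogether.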
Hi, I have a distributed setup. Each site has 1 indexer, 1 search head, and 1 master server. They are part of a cluster. One of the indexers on the other site was down for more than 48 hours. Now the indexer is up. On the master server I can see data durability as red. How do I fix the issue now?

Regards,
Nilupat
One of the universal forwarders is connected to another team's deployment server; we need to connect it to our deployment server now, and we don't want to use the other team's DS. I am not able to see any configuration related to that in deploymentclient.conf. What is the proper syntax to use for this?
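A typical deploymentclient.conf looks like the sketch below (the host and port are placeholders). If the forwarder currently points at the other team's DS, that setting is often delivered inside an app pushed by that DS, so running splunk btool deploymentclient list --debug shows which file wins before you set your own:

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = your-deployment-server.example.com:8089

A restart of the forwarder is needed after the change.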
Getting this issue on a Windows server. There's only an inputs.conf file with the following:

[monitor://L:\Logs\ApplicationLogs*.log]
sourcetype = xxx
index = yy
disabled = 0

1/12/2022 05:03.3 1000 ERROR WatchedFile [7088 tailreader0] - Bug during applyPendingMetadata, header processor does not own the indexed extractions confs.
1/12/2022 05:03.3 1000 ERROR TailReader [7088 tailreader0] - Ignoring path="L:\Logs\ApplicationLogs20220112VServer.log" due to: Bug during applyPendingMetadata, header processor does not own the indexed extractions confs.
Hi there, I would like to monitor indexes that have not been active for more than 24 hours and display the names of those indexes in a table, along with the time of the last received activity. Thanks
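A sketch of one approach with tstats (searching over a window long enough to include the quiet indexes, for example the last 30 days):

| tstats latest(_time) as last_event where index=* by index
| where last_event < relative_time(now(), "-24h")
| eval last_event = strftime(last_event, "%Y-%m-%d %H:%M:%S")
| table index last_event

One caveat: this only lists indexes that have at least one event inside the search time range; an index that has been silent longer than the range (or is completely empty) will not appear, so the range may need to be widened or the index list taken from REST instead.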
Hello Splunkers! I am trying to find a way to set up a cron schedule in the DB Connect app. I want to run the schedule on the second Tuesday of every month. So, the next schedules should be 02/08/2022, then 03/08/2022, then 04/05/2022... We tried super hard to make this work. Below is what we tried. I thought this would work, but it shows the next schedule will be on 1/18, not next month. Any brilliant ideas, Splunkers?

Thanks in advance.
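For what it's worth, a cron expression often used for this intent is:

0 6 8-14 * 2

(06:00 on a Tuesday that falls between the 8th and the 14th, which is always the second Tuesday). The caveat is that classic cron treats a restricted day-of-month plus a restricted day-of-week as an OR, which would fire on every day 8-14 and on every Tuesday; whether DB Connect's scheduler applies the OR or the AND interpretation is worth verifying against the "next execution" preview. If it turns out to be OR, a fallback is to schedule every Tuesday (0 6 * * 2) and discard the unwanted runs, for example with a day-of-month condition in the query itself.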
Is there a way to call a Python script from a dashboard and display the output data received from the script in the dashboard? I am not looking to ingest this data into Splunk, only to display it on demand when looking at the dashboard.
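One common pattern for this is a custom generating search command: the dashboard panel runs the command, the command runs the script and emits its output as search results, and nothing is indexed. A minimal sketch using the Splunk Python SDK (all names here are hypothetical, and splunklib must be bundled in the app's lib directory):

# commands.conf in the app
[callmyscript]
filename = callmyscript.py
chunked = true

# bin/callmyscript.py
import os
import sys
import time

# make the bundled splunklib importable
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration

@Configuration()
class CallMyScriptCommand(GeneratingCommand):
    def generate(self):
        # replace with a call to the real script (subprocess.run, import, etc.)
        for line in ["result one", "result two"]:
            yield {"_time": time.time(), "output": line}

dispatch(CallMyScriptCommand, sys.argv, sys.stdin, sys.stdout, __name__)

The dashboard panel's search is then simply | callmyscript, so the script only runs when the panel loads.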
https://docs.splunk.com/Documentation/Splunk/latest/admin/savedsearchesconf mentions two lookup-generating actions: action.lookup and action.populate_lookup. Some of the differences are clear, though not explicitly listed, in the docs. What's the complete set of differences? When should I use one or the other and when do I have to use outputlookup?

action.lookup = <boolean>
* Specifies whether the lookup action is enabled for this search.
* Default: false
action.lookup.filename = <lookup filename>
action.lookup.append = <boolean>

and

action.populate_lookup = <boolean>
* Specifies whether the lookup population action is enabled for this search.
* Default: false
action.populate_lookup.dest = <string>
run_on_startup = <boolean>
run_n_times = <unsigned integer>
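For concreteness, an illustrative savedsearches.conf stanza showing where each option sits (the search name, schedule, and filenames are made up):

[populate_uri_lookup]
search = index=web | stats count by uri
cron_schedule = 30 * * * *
enableSched = 1
action.lookup = 1
action.lookup.filename = uri_counts.csv
action.lookup.append = 0

versus the older-style population action:

action.populate_lookup = 1
action.populate_lookup.dest = uri_counts
run_on_startup = true

A third route is to end the search itself with | outputlookup uri_counts.csv, which keeps the behavior visible in the SPL and supports options (append, KV store collections) that the saved-search actions don't expose.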
Hi community, I am trying to filter out some undesired traffic from a particular index. I read about the option of using props.conf and transforms.conf. The query matching the traffic that I don't want looks like this:

index=abc sourcetype=abc_traffic dest_ip=255.255.255.255

The index abc is located in the Search app. So, I went to my search head -> /opt/splunk/etc/apps/search/local and modified props.conf with the following:

[abc_traffic]
TRANSFORMS-null= broadcast-null

Then, I created a TRANSFORMS.conf file in the same directory with the following entry:

[broadcast-null]
REGEX= dest_ip= 255.255.255.255
DEST_KEY= queue
FORMAT= nullQueue

Restarted Splunk. I am not sure if I am doing something wrong - maybe I am using the wrong location or format. I don't have much experience managing Splunk. Any help is appreciated!
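A sketch of how this is usually set up (assuming the string dest_ip=255.255.255.255 literally appears in the raw event text; adjust the pattern to however the destination IP is written in _raw): the props/transforms pair has to live where parsing happens - on the indexers or a heavy forwarder, not on the search head - the file must be named transforms.conf in lowercase, and the regex runs against _raw, so stray spaces and unescaped dots matter.

# props.conf (on the indexers / heavy forwarder)
[abc_traffic]
TRANSFORMS-null = broadcast-null

# transforms.conf (same app, same instance)
[broadcast-null]
REGEX = dest_ip=255\.255\.255\.255
DEST_KEY = queue
FORMAT = nullQueue

A restart of the indexers is needed for the parsing-time change to take effect, and it only applies to newly indexed data.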
I am getting performance errors in ES regarding the many indexes searched by users, especially the admin role. Any SPL or direction is much appreciated.
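One place to start (a sketch) is to check which indexes each role searches by default, since searches that don't specify an index fan out across everything in srchIndexesDefault:

| rest /services/authorization/roles splunk_server=local
| table title srchIndexesDefault srchIndexesAllowed

If the admin role (or a role it inherits from) defaults to * or a long list, trimming srchIndexesDefault, or requiring explicit index= terms in correlation searches and dashboards, usually reduces that class of load.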
I'm trying to have Splunk submit two separate events in one run.

def run():
    logging.info("Running Test....")
    now = time.time()
    output = f"""
<stream>
<event>
<time>{now}</time>
<data>event_status="(0)Item0."</data>
</event>
<event>
<time>{now}</time>
<data>event_status="(1)Item1."</data>
</event>
</stream>
"""
    print(output)
    sys.stdout.flush()

This runs and the XML is submitted, but it only shows as a single event:

1/12/22 9:47:54.000 PM
<stream>
<event>
<time>1642024074.8583786</time>
<data>event_status="(0)Item0."</data>
</event>
<event>
<time>1642024074.8583786</time>
<data>event_status="(1)Item1."</data>
</event>
</stream>

Is there any way to submit these two events so they show up as separate events? I'm looking at polling multiple statistics for a multitenant application and would like to display each tenant separately.
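For comparison, a minimal sketch of the plain scripted-input approach (this assumes the script runs as an ordinary scripted input rather than a modular input declared with streaming_mode="xml"; in that case Splunk indexes stdout as raw text, so the <stream>/<event> wrapper above is treated as literal event content). Writing one event per line lets default line breaking split them:

import sys
import time

def run():
    now = time.time()
    # one line per event; default line breaking splits on newlines
    for i in range(2):
        sys.stdout.write('event_status="(%d)Item%d."\n' % (i, i))
    sys.stdout.flush()

if __name__ == "__main__":
    run()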
I've been able to run a dashboard from the command line by:

1. copying and pasting the simple XML into a file
2. updating tokens with the desired values
3. running the PDF render command from curl, e.g.,

curl -sku guest:pwd "https://splunkhost:8089/services/pdfgen/render" --data-urlencode "input-dashboard-xml=$(cat sample-dashboard.xml)" -d namespace=search -d paper-size=a4-landscape > mydash.pdf

Is there a way to use Python/REST to do the same? I tried some of the endpoints and they create the dashboard XML with extra scaffolding; they seem to be intended for adding and updating dashboards rather than running the dashboard itself.
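One possible direction, mirroring the curl call above with the requests library (same endpoint and parameters; guest:pwd and the file names are the placeholders from the curl example):

import requests

# read the edited dashboard XML, as in steps 1-2 above
with open("sample-dashboard.xml") as f:
    dashboard_xml = f.read()

resp = requests.post(
    "https://splunkhost:8089/services/pdfgen/render",
    auth=("guest", "pwd"),                     # same credentials as curl -u
    data={
        "input-dashboard-xml": dashboard_xml,  # equivalent of --data-urlencode
        "namespace": "search",
        "paper-size": "a4-landscape",
    },
    verify=False,                              # equivalent of curl -k
)
resp.raise_for_status()

with open("mydash.pdf", "wb") as f:
    f.write(resp.content)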
I have SCK set up and collect my Kubernetes metrics. We have access out of the box to the node memory limit, kube.node.memory.allocatable (in MB), and to the memory usage, kube.node.memory.working_set_bytes (in bytes), but we want to do some calculations to get the memory usage percentage per node.
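A sketch of the percentage calculation with mstats (the metrics index name and the node dimension are assumptions; adjust them to whatever SCK writes in your environment). Since allocatable is in MB and working_set_bytes is in bytes, the bytes value is converted to MB first:

| mstats latest(kube.node.memory.working_set_bytes) as used_bytes
         latest(kube.node.memory.allocatable) as allocatable_mb
         where index=em_metrics by node span=5m
| eval mem_used_pct = round((used_bytes / 1024 / 1024) / allocatable_mb * 100, 2)
| timechart span=5m max(mem_used_pct) by node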