All Posts


Hello Dinesh, please check the docs below, and if you still have questions please create a case with AppDynamics Support (https://mycase.cloudapps.cisco.com/):
https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/install-app-server-agents/ne...
https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/install-app-server-agents/de...
@heathramos Since the `p_index` macro doesn't exist, here's how to dig into the dashboard and fix it.

Edit the dashboard directly:
1. Go to your dashboard.
2. Click the Edit button (top right).
3. Click each panel/visualization that's not showing data.
4. Click Edit Search (the magnifying glass) for each one.

You'll see searches that start with `p_index` sourcetype="pan:xdr_incident". Replace the macro with your actual index: change `p_index` to index=pan (or whatever your actual Palo Alto index is called), so the search becomes index=pan sourcetype="pan:xdr_incident". Do this for every search in the dashboard; there are probably 8-10 different searches, depending on your dashboard config.

While you're in there, also check that sourcetype="pan:xdr_incident" matches your actual data. Run index=pan | stats count by sourcetype first to confirm. If your sourcetype is different (like pan:incident or cortex:xdr), update those too.
I see a pan_index macro among others
Hi @heathramos Okay, so the app is installed and you should be able to see the macros in it. Are you able to see any of the previously mentioned macros when in the app?
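If it helps, one way to list which macros actually exist on the search head is the REST endpoint for macro definitions. This is a sketch: the pan*/p_index filter is an assumption based on the macro names mentioned in this thread, and your role needs permission to run the rest command.

```
| rest /services/admin/macros
| search title="pan*" OR title="p_index"
| table title definition eai:acl.app
```

If p_index doesn't appear in the results, the Add-on that ships it isn't installed (or isn't shared with your app context).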
@livehybrid I knew that you probably knew that, but it's worth posting so that the knowledge is spread. Yes, the rule about avoiding indexed fields is sound as a general rule of thumb, but of course in some well-thought-out cases indexed fields can bring tremendous gains in search speed. Especially if you're not searching for particular values but just want to get aggregations, which you can achieve with tstats. That's one of the valid reasons for using indexed fields sometimes. And yep, due to how they are internally represented, there is no way to distinguish between an indexed field and a simple indexed term with :: in the middle. I haven't worked with AWS logs, but I have noticed that in other cases.
@PickleRick I absolutely agree, although maybe I kept my justification too short! Having high cardinality in indexed fields can be a big problem and would not be advised (this annoys me about INDEXED_EXTRACTIONS=json), but indexed fields used well can drastically improve performance if users update searches to use tstats.

Side note: have you ever noticed that AWS logs often create high-cardinality indexed fields for things like S3 objects because of the double :: in the string? E.g. arn:aws:s3:::my-example-bucket/images/photo.jpg becomes a field arn:aws:s3 with value ":my-example-bucket/images/photo.jpg", which can end up being pretty high cardinality!

@nordinethales Has something been brought to your attention that leads you to think you have large tsidx files which are causing performance issues? Are you defining indexed fields at ingest time for some of your data (e.g. INDEXED_EXTRACTIONS=json, or other approaches such as INGEST_EVAL)? If you're planning to reduce the number of indexed fields, it's worth ensuring no searches are currently using them.

Did this answer help you? If so, please consider: adding karma to show it was useful; marking it as the solution if it resolved your issue; commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
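To illustrate the tstats point, a sketch of the two search styles. The index and field names here are made up for illustration, and the second search only works if status is actually an indexed field (or you go through an accelerated data model):

```
Scans raw events (slower, decompresses journal data):
index=firewall action=* | stats count by status

Reads only the tsidx lexicon (faster, no raw-event scan):
| tstats count where index=firewall by status
```

The speedup comes from tstats answering the aggregation entirely from the index files, which is exactly the case where paying the tsidx cost of an indexed field earns its keep.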
Splunk Support can get access to the file system if they don't have it, but they will no doubt ask "Why do you *need* to know?". I, too, wonder why you *need* to know. Splunk Cloud is a service, and how that service is provided shouldn't matter as long as you get what you pay for. What problem are you trying to solve?
Are you curling from the Splunk machine or your endpoint? "The site is taking too long to respond" happens in two cases:
1) You're not using a proxy server and the server you're trying to reach doesn't respond to your connection requests; most probably there is something filtering the network traffic between your desktop/laptop and the server.
2) You are using a proxy server and something is filtering the traffic between the proxy server and the destination server (or there is no routing between those servers).
Filtering is actually quite probable, since you're trying to reach the default port 8000, which is not a standard http(s) port, so it might not be allowed in your organization. Most probably you'll need to debug it with your network admin. You can try to verify with tcpdump/wireshark on the Splunk server's side whether the initial SYN packet even reaches the server.
Actually, indexing too many fields can have a negative impact on performance, since you're bloating your tsidx files heavily. Remember that for indexed fields you're creating a separate lexicon entry for each key-value pair. It might not be that heavy for fields with a low number of possible values (like true/false, accept/reject/drop and so on), but for indexed extractions from JSON data, where field names can be long and values can be unique long strings, you're growing your lexicon greatly. So even when you're using efficient searching methods (I don't know exactly what Splunk uses internally, but I'd expect something akin to B-trees or radix trees), the amount of data you have to dig through increases due to the sheer size of the set you're searching.
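If you want to see this lexicon growth directly, the walklex command can dump per-bucket lexicon entries. Treat this as a sketch from memory (the index name is a placeholder, and walklex availability/field names depend on your Splunk version):

```
| walklex index=my_json_index type=fieldvalue
| stats count AS lexicon_entries by field
| sort - lexicon_entries
```

Fields with huge lexicon_entries counts are the high-cardinality indexed fields that are inflating your tsidx files.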
Thank you for your response. I am able to access it with curl:

curl -v -L http://{domain}:8000

However, I cannot access it from browsers (I did try multiple browsers). It said "taking too long to respond". The firewall seems to be no issue:

1. ss -ltn | grep 8000
LISTEN 0 128 :8000
2. netstat -tlnp | grep 8000
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 29947/splunkd
Thanks for your replies. Indeed, I was using dbinspect to check bucket sizes, but I need the tsidx vs raw data file sizes. Do you know if Splunk support, as they should have access to the file system, could answer this need? Thanks, regards
The dbinspect command will give you the sizes of each bucket, but there's nothing I know of that will break that down further. To reduce the size of tsidx files, do fewer index-time field extractions (especially JSON) and only accelerate the data models you use.
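As a sketch of what "fewer index-time extractions" can look like in props.conf (the sourcetype name is hypothetical; also note the settings apply in different places, as commented):

```
# props.conf -- hypothetical sourcetype name, adapt to yours
[my:json:events]
# Index-time extraction: every JSON key/value pair lands in the tsidx
# lexicon. Applied where parsing happens (forwarder/indexer).
# INDEXED_EXTRACTIONS = json    <- remove or comment this out
# Search-time extraction instead: raw JSON is parsed only when searched.
# Applied on the search head.
KV_MODE = json
```

The trade-off is the one discussed above: smaller tsidx files and cheaper indexing, but you lose the ability to aggregate those fields with tstats.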
You cannot directly access or view tsidx and raw data file sizes in Splunk Cloud, as file system access is restricted. However, you can estimate index storage usage (including tsidx and raw data) using the dbinspect command:

| dbinspect index=<your_index>
| stats sum(rawSize) as total_raw_size sum(sizeOnDiskMB) as total_disk_size
| eval total_raw_size_MB=round(total_raw_size/1024/1024,2)
| table total_raw_size_MB total_disk_size

This provides an estimate of the raw data size and the total disk usage (which includes tsidx and other metadata). I don't think there is anything you can change in Splunk Cloud to reduce tsidx size, but I'm also confused as to why you want to. I'd argue an increased number of indexed fields (which would increase tsidx sizes) would *improve* search performance if used with things like tstats.
@livehybrid I tried the query that you suggested to check the internal logs for my HF and tweaked the keywords to look for anything related to FMC/Cisco/estreamer, but it does not show any error logs.
I have configured the Cisco Security Cloud app on the HF because our FMC is not allowed to have any outbound access. As far as the configuration is concerned, I was able to import the cert from FMC and save the configuration in the Cisco Security Cloud app. I also created the index on the HF as well as on the cloud instance. But I don't see any logs from that source in the cloud. I checked the internal logs on the HF and I don't see any errors related to this. I am adding the screenshot from the app configuration on the HF; it does not show the status as "Connected".

I tried opening a Cisco TAC case, but as soon as I select the product category as Splunk, it asks me to open a ticket with Splunk support. So I have been trying to figure out how to contact Cisco Support for the app add-on.

As additional info, I also have the Cisco Security Cloud app on the cloud instance, which I am using for integration with another Cisco cloud product, and that seems to be working fine.

Thank you! Parth
I checked and it was installed, but not running the latest version. I updated the PAN app and PAN add-on to the latest versions, but the dashboards still don't work.
I recognize this is an old post, but I ran into this issue today and this is how I solved it. In my case, there was another data model with the same name shared globally in another app (App B). I had to change the permissions of the global data model to "shared in app" (App B); then I could delete the other data model in App A. Once deleted, I changed the permissions for the data model in App B back to global. Hope this helps!
Hello, I would like to know if there is a way to see/check tsidx file sizes and raw data file sizes. I would like to reduce the tsidx file size to improve search performance. Thanks for the support, Nordine
Hi @OUnl If this is in relation to Splunk Education then please email your query to education@splunk.com. This is a community forum and you are not guaranteed a response from the correct team. You may also wish to remove your email address from the original post to protect yourself from spam, phishing, etc.
Hi @heathramos It sounds like you haven't installed the Splunk Add-on for Palo Alto Networks. You need this in addition to the Splunk App for Palo Alto Networks, because the Add-on contains all the macros that the dashboards in the app use, such as p_index, pan_tstats, pan_summariesonly and pan_logs. Please install this and hopefully it will resolve the issue. Once installed, check the p_index macro: by default this is "index=pan*", so if your index is called "pan" then the default should be fine.
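Once the Add-on is installed, a quick sanity check that the macro expands and actually matches data (this assumes the default p_index definition of "index=pan*" covers your index):

```
`p_index` | stats count by sourcetype
```

If this returns counts, the macro is resolving; if the sourcetypes listed don't match what the dashboards expect (e.g. pan:traffic, pan:threat), that's the next thing to fix.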