All Posts

Please provide more details about how the index is updated.
So initially my source is a SQL-based query. I modified the query by adding 2 new columns, then ran the source again. The dashboard has 2 reports which are linked to the index and the source (the SQL query). Their events have been showing 0 for the past 6 days. When I run index=<index name>, it shows 0 events.
Please explain your full process as you haven't really provided sufficient information to determine what you are doing, what you changed, what your results were before the change, etc.
OK, but I have new columns to be added. If I do so, the index stops working, so the data is not being forwarded to the indexer. Is there any option to run my index again?
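(Editor's note: when a DB Connect input stops ingesting after a query change, a common cause is that the rising-column checkpoint no longer matches the new result set, so the input returns no new rows. As a first troubleshooting step, a sketch that checks DB Connect's internal logs for errors — this assumes the default setup where the app logs to the _internal index:

```
index=_internal source=*splunk_app_db_connect* ERROR
```

If the input uses a rising column, re-checking or resetting the checkpoint value in the DB Connect input settings is usually the next thing to try.)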
If that's all you changed, then yes
you mean the SQL query?  
Try changing it back to how it was
@Skeer-Jamf did you get any resolution for this issue?
Thanks for the explanation @yuanliu 
Hi, I recently changed a SQL query in Splunk DB Connect for one of the dashboards. The query ran, but I don't see the dashboard reflecting the new data. When I checked, I saw that the index did not refresh after the new query was implemented: the last event in the index remains from the day I changed the query. The new query has two new columns, but I don't see them getting reflected either. Can anyone please help me with this? It's a bit urgent!
In this case, you can update filters like this:

gateway:
  enabled: true
  resources:
    requests:
      cpu: 100m
      memory: 500Mi
    limits:
      memory: 500Mi
  replicaCount: 1
  config:
    processors:
      filter/filter:
        logs:
          log_record:
            - 'IsMatch(body, ".*bot.*") == false'
    service:
      pipelines:
        logs:
          processors:
            - filter/filter

This way, when data comes in to the gateway it will be filtered, and all log entries with "bot" in the body will be removed. BTW, the previous configuration must also be under gateway.
Thank you @d_kazakov for the response. I was looking for a solution where, if a log entry contains a specific string, that entire log entry is excluded from being pushed to the Splunk indexer. Let me check if this solution works in that case or needs to be altered.
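(Editor's note: if the filtering ends up being done on the Splunk side instead — for example on a heavy forwarder — the classic approach for dropping whole events that match a string is routing them to nullQueue at parse time. A minimal sketch, where your_sourcetype and the regex bot are placeholder assumptions:

```
# props.conf
[your_sourcetype]
TRANSFORMS-drop_bot = drop_bot_events

# transforms.conf
[drop_bot_events]
REGEX = bot
DEST_KEY = queue
FORMAT = nullQueue
```

Events whose raw text matches REGEX never reach the index; everything else passes through unchanged.)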
Hey, dhimanv! I've managed to achieve it. A Splunk OnDemand request assisted with this issue. There are a couple of options, but in my case these filters worked to cut some fields from the JSON body and decrease the amount of GB we ingest:

logsCollection:
  containers:
    enabled: true
    useSplunkIncludeAnnotation: true
    extraOperators:
      - type: router
        default: noop-router
        routes:
          - expr: body contains "timestamp" and attributes.log matches "^{.*}$"
            output: remove-nginx-keys
          - expr: body contains "timestamp" and attributes.log matches "^{.*}\\n$"
            output: remove-nginx-keys
      - type: json_parser
        id: remove-nginx-keys
        parse_from: attributes.log
        parse_to: attributes.log
      - type: remove
        field: 'attributes.log.cf_ray'
        on_error: send
      - type: remove
        field: 'attributes.log.proxyUpstreamName'
        on_error: send
      - type: remove
        field: 'attributes.log.proxyAlternativeUpstreamName'
        on_error: send
      - type: remove
        field: 'attributes.log.upstreamAddr'
        on_error: send
      - type: remove
        field: 'attributes.log.upstreamStatus'
        on_error: send
      - type: remove
        field: 'attributes.log.requestID'
        on_error: send
      - id: noop-router
        type: noop

So the JSON goes from:

{"timestamp": "2023-12-20T10:05:17+00:00", "requestID": "ID", "proxyUpstreamName": "service-name", "proxyAlternativeUpstreamName": "", "upstreamStatus": "200", "upstreamAddr": "IP:4444", "Host": "DNS", "httpRequest": {"requestMethod": "POST", "requestUrl": "/request", "status": 200, "requestSize": "85", "responseSize": "14", "userAgent": "Google", "remoteIp": "IP", "referer": "", "latency": "0.003 s", "protocol": "HTTP/2.0"}, "cf_ray": "1239kvksad2139kc923"}

to:

{
  "Host": "web.web.eu",
  "httpRequest": {
    "latency": "0.092 s",
    "protocol": "HTTP/1.1",
    "referer": "referer",
    "remoteIp": "IP",
    "requestMethod": "GET",
    "requestSize": "834",
    "requestUrl": "/request",
    "responseSize": "133",
    "status": 200,
    "userAgent": "agent"
  },
  "timestamp": "2023-12-20T10:05:08+00:00"
}

Hope this helps!
Hello, what are the best methods to ingest Datadog log and metrics data into Splunk Cloud / a heavy forwarder? We have a requirement to fetch a Datadog dashboard and populate it into a Splunk dashboard. Thank you. Regards, Madhav
Hi, so I have the below base query:

| inputlookup abc.csv where DECOMMISSIONED=N
| fields DATABASE DB_VERSION APP_NAME ACTIVE_DC HOST_NAME DB_ROLE COMPLIANCE_FLAG PII PCI SOX
| rename DATABASE as Database
| join type=left Database
    [| metadata type=hosts index=data
     | fields host, lastTime, totalCount
     | eval Database=upper(host)
     | search totalCount>1
     | stats max(lastTime) as lastTime, last(totalCount) as totalCount by Database
     | eval age=round((now()-lastTime)/3600,1)
     | eval Status=case(lastTime>(now()-(3600*2)),"Low", lastTime<(now()-(3600*2+1)) AND lastTime>(now()-(3600*8)),"Medium", lastTime<(now()-(3600*8+1)) AND lastTime>(now()-(3600*24)),"High", 1=1,"Critical")
     | convert ctime(lastTime) timeformat="%d-%m-%Y %H:%M:%S"
     | eval Reference="SPL"]
| rex mode=sed field=HOST_NAME "s/\..*$//g"
| fields Database Reference DB_VERSION APP_NAME ACTIVE_DC HOST_NAME Status DB_ROLE COMPLIANCE_FLAG
| fillnull value=Missing Status
| fillnull value=Null

Now I need to add a field, say Privacy, with PII, PCI, and SOX as filter choices, but I don't want the raw values of those fields to show up as filter values in the Privacy field, and the selection should be reflected in the Summary tab. Can someone help with how I can get this working? I added this panel:

<!-- New Privacy Filter Panel -->
<input type="multiselect" token="privacyFilter" searchWhenChanged="true">
  <label>Privacy</label>
  <choice value="*">All</choice>
  <choice value="PII">PII</choice>
  <choice value="PCI">PCI</choice>
  <choice value="SOX">SOX</choice>
  <fieldForLabel>Privacy</fieldForLabel>
  <fieldForValue>Privacy</fieldForValue>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
</fieldset>

and this Summary row:

<row>
  <panel>
    <table>
      <title>Summary</title>
      <search base="base">
        <query>| search APP_NAME="$application$" Database="$database$" HOST_NAME="$host$" DB_VERSION="$version$" Status="$status$" COMPLIANCE_FLAG="$compliance$" Privacy="$privacyFilter$"
| eval StatusSort=case(Status="Missing","1",Status="Critical","2",Status="High","3",Status="Medium","4",Status="Low","5")
| sort StatusSort
| table APP_NAME Database HOST_NAME DB_VERSION ACTIVE_DC Status DB_ROLE COMPLIANCE_FLAG PII PCI SOX
| rename APP_NAME as Application, DB_VERSION as Version, ACTIVE_DC as DC, HOST_NAME as HOST</query>
      </search>
      <option name="count">10</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="refresh.display">progressbar</option>
      <option name="rowNumbers">true</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
      <format type="number" field="FileSize">
        <option name="precision">0</option>
      </format>
      <format type="color" field="Status">
        <colorPalette type="map">{"Missing":#DC4E41,"Critical":#F1813F,"High":#F8BE34,"Medium":#62B3B2,"Low":#53A051}</colorPalette>
      </format>
    </table>
  </panel>
</row>
</form>

but I'm getting "No results found".
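(Editor's note: a likely reason for "No results found" is that Privacy does not exist as a field in the base search results, so even Privacy="*" matches nothing — a wildcard only matches events where the field is present. A sketch of an eval that could be appended to the base search to build the field; it assumes PII, PCI, and SOX hold Y/N-style flags, which is an assumption about the lookup data:

```
| eval Privacy=mvappend(if(PII="Y","PII",null()), if(PCI="Y","PCI",null()), if(SOX="Y","SOX",null()))
```

Note also that a multiselect token may expand to several quoted values; the input's valuePrefix, valueSuffix, and delimiter options usually need to be set so the expanded token stays valid SPL, e.g. Privacy IN (...).)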
I am also looking for something like this. Has anyone tried to do this, and did it work?
Apologies for the delayed response. I am getting this checked from the Couchbase side to confirm whether the raw data there shows the same values. Thank you.
If you have already deployed the CP into services/KPIs/correlation searches, NEAPs (notable event aggregation policies), etc., it means they exist as objects in your ITSI. You can take an ITSI backup from this environment, restore it into another deployment (cloud, for example), and check the objects there. Just make sure to adjust the inputs, and make sure the lookups and indexes are there too.
You might want to use the outputcsv command. Like this:

| inputlookup itsi_entities_lookup
| eval alias=coalesce(title, "N/A")
| fields title, alias, fields, service
| outputcsv itsi_entities_export.csv
We can see only 10 hosts in index=os sourcetype=cpu and index=os source=vmstat. We should be getting all the Unix/Linux hosts for the mentioned sourcetype and source; we use this to generate high-CPU-utilization and high-memory-utilization incidents. Until the end of August we could see 100+ hosts for this source and sourcetype, but since then we only see around 10, 15, or 7. Please help me with this.
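(Editor's note: a first step for narrowing down when the hosts disappeared is charting the distinct host count per day over the affected period. A sketch using tstats, with the index and sourcetype names taken from the post above:

```
| tstats dc(host) as host_count where index=os sourcetype=cpu by _time span=1d
```

If the count drops sharply on a specific day, checking forwarder connectivity and the Splunk Add-on for Unix and Linux inputs on the missing hosts from that date onward is the usual next step.)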